Test Report: Docker_Linux_containerd_arm64 12230

                    
1c76ff5cea01605c2d985c010644edf1e689d34b:2021-08-13:19970

Test failures (14/252)

TestAddons/parallel/Registry (285.72s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 25.000978ms
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-5f6m6" [b842920d-03bf-4426-9765-5fb36b90afb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
helpers_test.go:343: "registry-5f6m6" [b842920d-03bf-4426-9765-5fb36b90afb9] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 1m1.011844862s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-dg8n7" [3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007056227s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210813032940-2022292 delete po -l run=registry-test --now
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210813032940-2022292 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:299: (dbg) Non-zero exit: kubectl --context addons-20210813032940-2022292 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.136995335s)

-- stdout --
	pod "registry-test" deleted
-- /stdout --
** stderr **
	error: timed out waiting for the condition
** /stderr **
addons_test.go:301: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-20210813032940-2022292 run --rm registry-test --restart=Never --image=busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:305: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210813032940-2022292 ip
2021/08/13 03:39:08 [DEBUG] GET http://192.168.49.2:5000
2021/08/13 03:39:08 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:08 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/13 03:39:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:09 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/13 03:39:11 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:11 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/13 03:39:15 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:15 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/13 03:39:23 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:23 [DEBUG] GET http://192.168.49.2:5000
2021/08/13 03:39:23 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:23 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/13 03:39:24 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:24 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/13 03:39:26 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:26 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/13 03:39:30 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:30 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/13 03:39:38 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:38 [DEBUG] GET http://192.168.49.2:5000
2021/08/13 03:39:38 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:38 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/13 03:39:39 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:39 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/13 03:39:41 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:41 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/13 03:39:45 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:45 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/13 03:39:53 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:54 [DEBUG] GET http://192.168.49.2:5000
2021/08/13 03:39:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/13 03:39:55 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:55 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/13 03:39:57 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:39:57 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/13 03:40:01 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:01 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/13 03:40:09 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:11 [DEBUG] GET http://192.168.49.2:5000
2021/08/13 03:40:11 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:11 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/13 03:40:12 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:12 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/13 03:40:14 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:14 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/13 03:40:18 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:18 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/13 03:40:26 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:30 [DEBUG] GET http://192.168.49.2:5000
2021/08/13 03:40:30 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:30 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/13 03:40:31 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:31 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/13 03:40:33 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:33 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/13 03:40:37 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:37 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/13 03:40:45 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:51 [DEBUG] GET http://192.168.49.2:5000
2021/08/13 03:40:51 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:51 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/13 03:40:52 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:52 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/13 03:40:54 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:54 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/13 03:40:58 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:40:58 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/13 03:41:06 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:41:10 [DEBUG] GET http://192.168.49.2:5000
2021/08/13 03:41:10 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:41:10 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/13 03:41:11 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:41:11 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/13 03:41:13 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:41:13 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/13 03:41:17 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:41:17 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/13 03:41:25 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:41:29 [DEBUG] GET http://192.168.49.2:5000
2021/08/13 03:41:29 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:41:29 [DEBUG] GET http://192.168.49.2:5000: retrying in 1s (4 left)
2021/08/13 03:41:30 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:41:30 [DEBUG] GET http://192.168.49.2:5000: retrying in 2s (3 left)
2021/08/13 03:41:32 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:41:32 [DEBUG] GET http://192.168.49.2:5000: retrying in 4s (2 left)
2021/08/13 03:41:36 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
2021/08/13 03:41:36 [DEBUG] GET http://192.168.49.2:5000: retrying in 8s (1 left)
2021/08/13 03:41:44 [ERR] GET http://192.168.49.2:5000 request failed: Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:339: failed to check external access to http://192.168.49.2:5000: GET http://192.168.49.2:5000 giving up after 5 attempt(s): Get "http://192.168.49.2:5000": dial tcp 192.168.49.2:5000: connect: connection refused
addons_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210813032940-2022292 addons disable registry --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210813032940-2022292
helpers_test.go:236: (dbg) docker inspect addons-20210813032940-2022292:

-- stdout --
	[
	    {
	        "Id": "5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212",
	        "Created": "2021-08-13T03:29:46.326395701Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2023217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T03:29:46.770105005Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/hostname",
	        "HostsPath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/hosts",
	        "LogPath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212-json.log",
	        "Name": "/addons-20210813032940-2022292",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210813032940-2022292:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210813032940-2022292",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa317
1af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/d
ocker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41
f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210813032940-2022292",
	                "Source": "/var/lib/docker/volumes/addons-20210813032940-2022292/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210813032940-2022292",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210813032940-2022292",
	                "name.minikube.sigs.k8s.io": "addons-20210813032940-2022292",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63cc91236c7d0216218ed6a99d16bf5a5214d1f2a29fe790b354ed1c3d95269a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50803"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50802"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50799"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50801"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50800"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/63cc91236c7d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210813032940-2022292": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5eb115611cc3",
	                        "addons-20210813032940-2022292"
	                    ],
	                    "NetworkID": "1437cc990d89cd4c2f4b86b77c1e915486671cda7aa7c792c2322229d169e87c",
	                    "EndpointID": "96d417aa7e8c5077c1e5d843cea177ffb9c204a83528a1fa41771ba12d8e11cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210813032940-2022292 -n addons-20210813032940-2022292
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210813032940-2022292 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-20210813032940-2022292 logs -n 25: (1.205226433s)
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                  | download-only-20210813032822-2022292   | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:26 UTC | Fri, 13 Aug 2021 03:29:26 UTC |
	| delete  | -p                                     | download-only-20210813032822-2022292   | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:26 UTC | Fri, 13 Aug 2021 03:29:26 UTC |
	|         | download-only-20210813032822-2022292   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-only-20210813032822-2022292   | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:26 UTC | Fri, 13 Aug 2021 03:29:26 UTC |
	|         | download-only-20210813032822-2022292   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-docker-20210813032926-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:40 UTC | Fri, 13 Aug 2021 03:29:40 UTC |
	|         | download-docker-20210813032926-2022292 |                                        |         |         |                               |                               |
	| start   | -p                                     | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:40 UTC | Fri, 13 Aug 2021 03:37:01 UTC |
	|         | addons-20210813032940-2022292          |                                        |         |         |                               |                               |
	|         | --wait=true --memory=4000              |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --addons=registry                      |                                        |         |         |                               |                               |
	|         | --addons=metrics-server                |                                        |         |         |                               |                               |
	|         | --addons=olm                           |                                        |         |         |                               |                               |
	|         | --addons=volumesnapshots               |                                        |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver           |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --addons=ingress                       |                                        |         |         |                               |                               |
	|         | --addons=gcp-auth                      |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:39:08 UTC | Fri, 13 Aug 2021 03:39:08 UTC |
	|         | ip                                     |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:41:44 UTC | Fri, 13 Aug 2021 03:41:44 UTC |
	|         | addons disable registry                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 03:29:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 03:29:40.904577 2022781 out.go:298] Setting OutFile to fd 1 ...
	I0813 03:29:40.904648 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:29:40.904652 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:29:40.904656 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:29:40.904776 2022781 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 03:29:40.905029 2022781 out.go:305] Setting JSON to false
	I0813 03:29:40.905896 2022781 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":47525,"bootTime":1628777856,"procs":373,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 03:29:40.905961 2022781 start.go:121] virtualization:  
	I0813 03:29:40.908162 2022781 out.go:177] * [addons-20210813032940-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0813 03:29:40.910717 2022781 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 03:29:40.909322 2022781 notify.go:169] Checking for updates...
	I0813 03:29:40.912282 2022781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 03:29:40.913989 2022781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 03:29:40.915709 2022781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 03:29:40.915862 2022781 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 03:29:40.950762 2022781 docker.go:132] docker version: linux-20.10.8
	I0813 03:29:40.950850 2022781 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:29:41.048943 2022781 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:29:40.991652348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:29:41.049041 2022781 docker.go:244] overlay module found
	I0813 03:29:41.051223 2022781 out.go:177] * Using the docker driver based on user configuration
	I0813 03:29:41.051242 2022781 start.go:278] selected driver: docker
	I0813 03:29:41.051247 2022781 start.go:751] validating driver "docker" against <nil>
	I0813 03:29:41.051260 2022781 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 03:29:41.051298 2022781 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 03:29:41.051322 2022781 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 03:29:41.053106 2022781 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 03:29:41.053411 2022781 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:29:41.128846 2022781 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:29:41.078382939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:29:41.128961 2022781 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 03:29:41.129117 2022781 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 03:29:41.129138 2022781 cni.go:93] Creating CNI manager for ""
	I0813 03:29:41.129145 2022781 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:29:41.129158 2022781 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 03:29:41.129163 2022781 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 03:29:41.129174 2022781 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 03:29:41.129183 2022781 start_flags.go:277] config:
	{Name:addons-20210813032940-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:29:41.131398 2022781 out.go:177] * Starting control plane node addons-20210813032940-2022292 in cluster addons-20210813032940-2022292
	I0813 03:29:41.131428 2022781 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 03:29:41.133269 2022781 out.go:177] * Pulling base image ...
	I0813 03:29:41.133290 2022781 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:29:41.133320 2022781 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0813 03:29:41.133338 2022781 cache.go:56] Caching tarball of preloaded images
	I0813 03:29:41.133463 2022781 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 03:29:41.133484 2022781 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 03:29:41.133759 2022781 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/config.json ...
	I0813 03:29:41.133785 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/config.json: {Name:mk0d1eb11345f673782e67cee6dd1983fc2ade38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:29:41.133935 2022781 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 03:29:41.165619 2022781 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 03:29:41.165641 2022781 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 03:29:41.165654 2022781 cache.go:205] Successfully downloaded all kic artifacts
	I0813 03:29:41.165678 2022781 start.go:313] acquiring machines lock for addons-20210813032940-2022292: {Name:mk4b9c97c204520a15a5934e9d971902370f4475 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 03:29:41.165798 2022781 start.go:317] acquired machines lock for "addons-20210813032940-2022292" in 99.224µs
	I0813 03:29:41.165826 2022781 start.go:89] Provisioning new machine with config: &{Name:addons-20210813032940-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 03:29:41.165896 2022781 start.go:126] createHost starting for "" (driver="docker")
	I0813 03:29:41.168439 2022781 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0813 03:29:41.168667 2022781 start.go:160] libmachine.API.Create for "addons-20210813032940-2022292" (driver="docker")
	I0813 03:29:41.168697 2022781 client.go:168] LocalClient.Create starting
	I0813 03:29:41.168779 2022781 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 03:29:41.457503 2022781 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 03:29:42.069244 2022781 cli_runner.go:115] Run: docker network inspect addons-20210813032940-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 03:29:42.096969 2022781 cli_runner.go:162] docker network inspect addons-20210813032940-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 03:29:42.097037 2022781 network_create.go:255] running [docker network inspect addons-20210813032940-2022292] to gather additional debugging logs...
	I0813 03:29:42.097062 2022781 cli_runner.go:115] Run: docker network inspect addons-20210813032940-2022292
	W0813 03:29:42.123327 2022781 cli_runner.go:162] docker network inspect addons-20210813032940-2022292 returned with exit code 1
	I0813 03:29:42.123351 2022781 network_create.go:258] error running [docker network inspect addons-20210813032940-2022292]: docker network inspect addons-20210813032940-2022292: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210813032940-2022292
	I0813 03:29:42.123365 2022781 network_create.go:260] output of [docker network inspect addons-20210813032940-2022292]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210813032940-2022292
	
	** /stderr **
	I0813 03:29:42.123423 2022781 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 03:29:42.150055 2022781 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x4000892220] misses:0}
	I0813 03:29:42.150105 2022781 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 03:29:42.150124 2022781 network_create.go:106] attempt to create docker network addons-20210813032940-2022292 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 03:29:42.150170 2022781 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210813032940-2022292
	I0813 03:29:42.365897 2022781 network_create.go:90] docker network addons-20210813032940-2022292 192.168.49.0/24 created
	I0813 03:29:42.365924 2022781 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210813032940-2022292" container
	I0813 03:29:42.365989 2022781 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 03:29:42.392753 2022781 cli_runner.go:115] Run: docker volume create addons-20210813032940-2022292 --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --label created_by.minikube.sigs.k8s.io=true
	I0813 03:29:42.465525 2022781 oci.go:102] Successfully created a docker volume addons-20210813032940-2022292
	I0813 03:29:42.465589 2022781 cli_runner.go:115] Run: docker run --rm --name addons-20210813032940-2022292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --entrypoint /usr/bin/test -v addons-20210813032940-2022292:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 03:29:46.145957 2022781 cli_runner.go:168] Completed: docker run --rm --name addons-20210813032940-2022292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --entrypoint /usr/bin/test -v addons-20210813032940-2022292:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (3.680326113s)
	I0813 03:29:46.145978 2022781 oci.go:106] Successfully prepared a docker volume addons-20210813032940-2022292
	W0813 03:29:46.146006 2022781 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 03:29:46.146013 2022781 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 03:29:46.146068 2022781 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 03:29:46.146285 2022781 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:29:46.146304 2022781 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 03:29:46.146345 2022781 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210813032940-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 03:29:46.276252 2022781 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210813032940-2022292 --name addons-20210813032940-2022292 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210813032940-2022292 --network addons-20210813032940-2022292 --ip 192.168.49.2 --volume addons-20210813032940-2022292:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 03:29:46.777489 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Running}}
	I0813 03:29:46.831248 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:29:46.876305 2022781 cli_runner.go:115] Run: docker exec addons-20210813032940-2022292 stat /var/lib/dpkg/alternatives/iptables
	I0813 03:29:46.966276 2022781 oci.go:278] the created container "addons-20210813032940-2022292" has a running status.
	I0813 03:29:46.966302 2022781 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa...
	I0813 03:29:48.086545 2022781 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 03:30:00.277285 2022781 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210813032940-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (14.130901108s)
	I0813 03:30:00.277310 2022781 kic.go:188] duration metric: took 14.131004 seconds to extract preloaded images to volume
	I0813 03:30:00.350109 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:00.387273 2022781 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 03:30:00.387290 2022781 kic_runner.go:115] Args: [docker exec --privileged addons-20210813032940-2022292 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 03:30:00.480821 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:00.523697 2022781 machine.go:88] provisioning docker machine ...
	I0813 03:30:00.523726 2022781 ubuntu.go:169] provisioning hostname "addons-20210813032940-2022292"
	I0813 03:30:00.523781 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:00.558122 2022781 main.go:130] libmachine: Using SSH client type: native
	I0813 03:30:00.558295 2022781 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50803 <nil> <nil>}
	I0813 03:30:00.558308 2022781 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210813032940-2022292 && echo "addons-20210813032940-2022292" | sudo tee /etc/hostname
	I0813 03:30:00.689627 2022781 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210813032940-2022292
	
	I0813 03:30:00.689693 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:00.721994 2022781 main.go:130] libmachine: Using SSH client type: native
	I0813 03:30:00.722165 2022781 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50803 <nil> <nil>}
	I0813 03:30:00.722192 2022781 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210813032940-2022292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210813032940-2022292/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210813032940-2022292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 03:30:00.836190 2022781 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 03:30:00.836215 2022781 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 03:30:00.836235 2022781 ubuntu.go:177] setting up certificates
	I0813 03:30:00.836244 2022781 provision.go:83] configureAuth start
	I0813 03:30:00.836296 2022781 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813032940-2022292
	I0813 03:30:00.866297 2022781 provision.go:137] copyHostCerts
	I0813 03:30:00.866361 2022781 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 03:30:00.866440 2022781 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 03:30:00.866493 2022781 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 03:30:00.866533 2022781 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.addons-20210813032940-2022292 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210813032940-2022292]
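	The provision step above generates a CA-signed server certificate whose SAN list covers the node IP, loopback, and the machine names. A rough openssl reproduction of that step, a sketch only: the file names, subjects, and temp directory here are throwaway stand-ins, not minikube's real crypto helpers or layout.

```shell
set -eu
work=$(mktemp -d)
# Stand-in for .minikube/certs/ca.pem + ca-key.pem: a throwaway self-signed CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$work/ca-key.pem" \
  -out "$work/ca.pem" -subj "/O=minikubeCA" -days 1 2>/dev/null
# Server key and CSR; the org mirrors the log's org= field.
openssl req -newkey rsa:2048 -nodes -keyout "$work/server-key.pem" \
  -out "$work/server.csr" -subj "/O=jenkins.addons-20210813032940-2022292" 2>/dev/null
# SAN list taken from the san=[...] field on the log line above.
printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-20210813032940-2022292\n' > "$work/san.cnf"
# Sign the CSR with the throwaway CA, attaching the SANs.
openssl x509 -req -in "$work/server.csr" -CA "$work/ca.pem" \
  -CAkey "$work/ca-key.pem" -CAcreateserial -out "$work/server.pem" \
  -days 1 -extfile "$work/san.cnf" 2>/dev/null
openssl x509 -in "$work/server.pem" -noout -text | grep 'IP Address:192.168.49.2'
```

The resulting server.pem/server-key.pem pair corresponds to what the log later copies to /etc/docker on the remote side.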
	I0813 03:30:01.389006 2022781 provision.go:171] copyRemoteCerts
	I0813 03:30:01.389079 2022781 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 03:30:01.389121 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.419000 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.502519 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 03:30:01.520038 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0813 03:30:01.534523 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 03:30:01.548773 2022781 provision.go:86] duration metric: configureAuth took 712.517206ms
	I0813 03:30:01.548788 2022781 ubuntu.go:193] setting minikube options for container-runtime
	I0813 03:30:01.548937 2022781 machine.go:91] provisioned docker machine in 1.0252225s
	I0813 03:30:01.548943 2022781 client.go:171] LocalClient.Create took 20.380236744s
	I0813 03:30:01.548963 2022781 start.go:168] duration metric: libmachine.API.Create for "addons-20210813032940-2022292" took 20.380294582s
	I0813 03:30:01.548971 2022781 start.go:267] post-start starting for "addons-20210813032940-2022292" (driver="docker")
	I0813 03:30:01.548975 2022781 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 03:30:01.549015 2022781 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 03:30:01.549053 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.580251 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.662251 2022781 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 03:30:01.664643 2022781 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 03:30:01.664666 2022781 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 03:30:01.664677 2022781 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 03:30:01.664684 2022781 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 03:30:01.664694 2022781 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 03:30:01.664745 2022781 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 03:30:01.664771 2022781 start.go:270] post-start completed in 115.793872ms
	I0813 03:30:01.665039 2022781 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813032940-2022292
	I0813 03:30:01.693800 2022781 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/config.json ...
	I0813 03:30:01.694005 2022781 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 03:30:01.694053 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.721552 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
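	The disk check above parses column 5 (Use%) of the second `df -h` line for /var. The same one-liner works against any mount point; this sketch adds `-P` (POSIX output) only as a guard against long device names wrapping `df`'s output, which would throw off `NR==2`:

```shell
# Print the used-space percentage of the filesystem holding /tmp.
usage=$(df -hP /tmp | awk 'NR==2{print $5}')
echo "$usage"
```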
	I0813 03:30:01.801607 2022781 start.go:129] duration metric: createHost completed in 20.635699035s
	I0813 03:30:01.801629 2022781 start.go:80] releasing machines lock for "addons-20210813032940-2022292", held for 20.635816952s
	I0813 03:30:01.801697 2022781 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813032940-2022292
	I0813 03:30:01.830486 2022781 ssh_runner.go:149] Run: systemctl --version
	I0813 03:30:01.830532 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.830558 2022781 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 03:30:01.830610 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.866554 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.870429 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.952247 2022781 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 03:30:02.115786 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 03:30:02.123997 2022781 docker.go:153] disabling docker service ...
	I0813 03:30:02.124041 2022781 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 03:30:02.145698 2022781 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 03:30:02.154128 2022781 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 03:30:02.230172 2022781 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 03:30:02.309742 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 03:30:02.317886 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 03:30:02.328545 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 03:30:02.342323 2022781 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 03:30:02.348596 2022781 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 03:30:02.353961 2022781 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 03:30:02.428966 2022781 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 03:30:02.561206 2022781 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 03:30:02.561311 2022781 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 03:30:02.564783 2022781 start.go:417] Will wait 60s for crictl version
	I0813 03:30:02.564854 2022781 ssh_runner.go:149] Run: sudo crictl version
	I0813 03:30:02.628354 2022781 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T03:30:02Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
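	retry.go above backs off ~11s and re-runs `sudo crictl version` until containerd's CRI server finishes initializing. A generic poll-until-ready loop in that spirit; the check command here (`true`/`false`) is a stand-in for the real crictl call:

```shell
# Run "$@" until it succeeds, giving up after a fixed number of attempts.
wait_ready() {
  tries=0
  until "$@"; do
    tries=$((tries + 1))
    [ "$tries" -ge 3 ] && return 1
    sleep 1
  done
}
wait_ready true && echo "runtime ready"
```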
	I0813 03:30:13.675200 2022781 ssh_runner.go:149] Run: sudo crictl version
	I0813 03:30:13.709613 2022781 start.go:426] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.6
	RuntimeApiVersion:  v1alpha2
	I0813 03:30:13.709705 2022781 ssh_runner.go:149] Run: containerd --version
	I0813 03:30:13.733653 2022781 ssh_runner.go:149] Run: containerd --version
	I0813 03:30:13.757722 2022781 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.6 ...
	I0813 03:30:13.757794 2022781 cli_runner.go:115] Run: docker network inspect addons-20210813032940-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 03:30:13.786942 2022781 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 03:30:13.789852 2022781 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
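	The hosts edit above is an idempotent replace-or-append: strip any existing host.minikube.internal line, append the fresh mapping, and swap the temp file into place. Demonstrated on a throwaway copy rather than /etc/hosts:

```shell
hosts=/tmp/demo-hosts
# Seed the file with a stale mapping for the same hostname.
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
# Drop the old entry, append the new one, then replace the file.
{ grep -v 'host\.minikube\.internal$' "$hosts"; printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.$$"
mv "$hosts.$$" "$hosts"
cat "$hosts"
```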
	I0813 03:30:13.798329 2022781 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:30:13.798392 2022781 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 03:30:13.822356 2022781 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 03:30:13.822377 2022781 containerd.go:517] Images already preloaded, skipping extraction
	I0813 03:30:13.822420 2022781 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 03:30:13.844103 2022781 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 03:30:13.844124 2022781 cache_images.go:74] Images are preloaded, skipping loading
	I0813 03:30:13.844175 2022781 ssh_runner.go:149] Run: sudo crictl info
	I0813 03:30:13.867458 2022781 cni.go:93] Creating CNI manager for ""
	I0813 03:30:13.867480 2022781 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:30:13.867493 2022781 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 03:30:13.867528 2022781 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210813032940-2022292 NodeName:addons-20210813032940-2022292 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 03:30:13.867709 2022781 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "addons-20210813032940-2022292"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 03:30:13.867799 2022781 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-20210813032940-2022292 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 03:30:13.867861 2022781 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 03:30:13.874164 2022781 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 03:30:13.874220 2022781 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 03:30:13.880082 2022781 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (574 bytes)
	I0813 03:30:13.891242 2022781 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 03:30:13.902573 2022781 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0813 03:30:13.913737 2022781 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 03:30:13.916383 2022781 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 03:30:13.924200 2022781 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292 for IP: 192.168.49.2
	I0813 03:30:13.924238 2022781 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 03:30:14.303335 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt ...
	I0813 03:30:14.303366 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt: {Name:mk3901a19599d51a2d50c48585ff3f7192ba4433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.303553 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key ...
	I0813 03:30:14.303570 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key: {Name:mk845cb200e03c80833445af29652075ca29c5ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.303661 2022781 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 03:30:14.625439 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt ...
	I0813 03:30:14.625463 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt: {Name:mk50086ce36a18e239ef358ebe31b06ec58a54a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.625614 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key ...
	I0813 03:30:14.625629 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key: {Name:mkcd9f75f5685763d3008dae66cb562ca8ff349f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.625754 2022781 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.key
	I0813 03:30:14.625769 2022781 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt with IP's: []
	I0813 03:30:14.981494 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt ...
	I0813 03:30:14.981520 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: {Name:mk67389ffe06e3642f68dcb5d06f25c4a4286db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.981694 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.key ...
	I0813 03:30:14.981709 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.key: {Name:mk98a53e6092aad61eaf9907276fc969c6b86e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.981803 2022781 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2
	I0813 03:30:14.981815 2022781 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 03:30:15.445439 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2 ...
	I0813 03:30:15.445467 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2: {Name:mk68008aff00f28fd78f3516c58a44d15f90967b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:15.445636 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2 ...
	I0813 03:30:15.445650 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2: {Name:mk75a5de72872e71c9f625f9410c2e8267bb030b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:15.445738 2022781 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt
	I0813 03:30:15.445794 2022781 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key
	I0813 03:30:15.445841 2022781 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key
	I0813 03:30:15.445852 2022781 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt with IP's: []
	I0813 03:30:16.134694 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt ...
	I0813 03:30:16.134726 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt: {Name:mkc9f3f094f59bf4cae95593974525020ed0791c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:16.134902 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key ...
	I0813 03:30:16.134917 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key: {Name:mk3f97104a527dd489a07fc16ea52fabc4e3c427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:16.135088 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 03:30:16.135130 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 03:30:16.135160 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 03:30:16.135186 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 03:30:16.137695 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 03:30:16.153617 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 03:30:16.168774 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 03:30:16.183608 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 03:30:16.198875 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 03:30:16.214461 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 03:30:16.229908 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 03:30:16.245519 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 03:30:16.261024 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 03:30:16.276150 2022781 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 03:30:16.287817 2022781 ssh_runner.go:149] Run: openssl version
	I0813 03:30:16.292417 2022781 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 03:30:16.298969 2022781 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:30:16.301737 2022781 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 13 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:30:16.301792 2022781 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:30:16.306354 2022781 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
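The `test -L ... || ln -fs ...` command above installs the CA under a hash-named symlink only if one is not already present, which makes re-runs idempotent. A minimal sketch of that pattern, using a temp directory instead of `/etc/ssl/certs` (all paths and the hash-style link name here are illustrative, not taken from a real trust store):

```shell
# Stand-in for /usr/share/ca-certificates and /etc/ssl/certs
workdir=$(mktemp -d)
touch "$workdir/minikubeCA.pem"

# Hash-named link; the value b5213941.0 mirrors the log but is hypothetical here
link="$workdir/b5213941.0"

# Create the symlink only if it does not already exist,
# mirroring `test -L ... || ln -fs ...` from the log:
test -L "$link" || ln -fs "$workdir/minikubeCA.pem" "$link"
test -L "$link" || ln -fs "$workdir/minikubeCA.pem" "$link"   # second run is a no-op

readlink "$link"
```

In the real flow the link name comes from `openssl x509 -hash -noout -in <cert>`, so OpenSSL can locate the CA by subject hash at verification time.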
	I0813 03:30:16.312798 2022781 kubeadm.go:390] StartCluster: {Name:addons-20210813032940-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:30:16.312962 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 03:30:16.313019 2022781 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 03:30:16.341509 2022781 cri.go:76] found id: ""
	I0813 03:30:16.341598 2022781 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 03:30:16.348137 2022781 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 03:30:16.354334 2022781 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 03:30:16.354389 2022781 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 03:30:16.360245 2022781 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 03:30:16.360292 2022781 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 03:30:16.991998 2022781 out.go:204]   - Generating certificates and keys ...
	I0813 03:30:22.528739 2022781 out.go:204]   - Booting up control plane ...
	I0813 03:30:42.096737 2022781 out.go:204]   - Configuring RBAC rules ...
	I0813 03:30:42.513534 2022781 cni.go:93] Creating CNI manager for ""
	I0813 03:30:42.513560 2022781 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:30:42.515615 2022781 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 03:30:42.515681 2022781 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 03:30:42.519188 2022781 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 03:30:42.519210 2022781 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 03:30:42.531743 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 03:30:43.275208 2022781 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 03:30:43.275325 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:43.275388 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=addons-20210813032940-2022292 minikube.k8s.io/updated_at=2021_08_13T03_30_43_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:43.426738 2022781 ops.go:34] apiserver oom_adj: -16
	I0813 03:30:43.426841 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:44.011413 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:44.511783 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:45.011599 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:45.510878 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:46.011296 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:46.511321 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:47.010930 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:47.510919 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:48.011476 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:48.511258 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:49.010873 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:49.511635 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:50.010907 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:50.511782 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:51.011260 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:51.511532 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:52.011061 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:52.510863 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:53.010893 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:53.511752 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:54.011653 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:54.511235 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:55.011781 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:55.511793 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:56.011692 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:56.511006 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:56.652188 2022781 kubeadm.go:985] duration metric: took 13.376902139s to wait for elevateKubeSystemPrivileges.
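The run of identical `kubectl get sa default` lines above is a fixed-interval poll: minikube retries roughly every 500ms until the default service account exists, then records the total duration. A minimal sketch of that retry shape, with a hypothetical `check` function standing in for the real `kubectl` call:

```shell
# Fixed-interval retry loop, as behind the repeated
# `kubectl get sa default` calls in the log above.
attempts=0
check() { [ "$attempts" -ge 3 ]; }   # hypothetical: succeeds on the 4th call

until check; do
  attempts=$((attempts + 1))
  sleep 0.1   # the log suggests a ~500ms interval; shortened here
done
echo "succeeded after $attempts retries"
```

A real version would also bound the loop with a deadline (the test waits at most 6m0s) rather than retrying forever.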
	I0813 03:30:56.652210 2022781 kubeadm.go:392] StartCluster complete in 40.339416945s
	I0813 03:30:56.652225 2022781 settings.go:142] acquiring lock: {Name:mke0b9bf6059169e73bfde24fe8e8162c3ec0654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:56.652354 2022781 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 03:30:56.652762 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk6797826f33680e9cda7cd38a7adfcabda9681c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:57.192592 2022781 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210813032940-2022292" rescaled to 1
	I0813 03:30:57.192649 2022781 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 03:30:57.194497 2022781 out.go:177] * Verifying Kubernetes components...
	I0813 03:30:57.194581 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 03:30:57.192709 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 03:30:57.192937 2022781 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0813 03:30:57.194766 2022781 addons.go:59] Setting volumesnapshots=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.194794 2022781 addons.go:135] Setting addon volumesnapshots=true in "addons-20210813032940-2022292"
	I0813 03:30:57.194829 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.195355 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.195502 2022781 addons.go:59] Setting ingress=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.195519 2022781 addons.go:135] Setting addon ingress=true in "addons-20210813032940-2022292"
	I0813 03:30:57.195539 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.195954 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.196016 2022781 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.196040 2022781 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210813032940-2022292"
	I0813 03:30:57.196063 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.196472 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.196530 2022781 addons.go:59] Setting default-storageclass=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.196541 2022781 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210813032940-2022292"
	I0813 03:30:57.196744 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.196803 2022781 addons.go:59] Setting gcp-auth=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.196814 2022781 mustload.go:65] Loading cluster: addons-20210813032940-2022292
	I0813 03:30:57.197132 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.197184 2022781 addons.go:59] Setting olm=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.197194 2022781 addons.go:135] Setting addon olm=true in "addons-20210813032940-2022292"
	I0813 03:30:57.197212 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.200564 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.210302 2022781 addons.go:59] Setting metrics-server=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.210328 2022781 addons.go:135] Setting addon metrics-server=true in "addons-20210813032940-2022292"
	I0813 03:30:57.210364 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.210821 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.210931 2022781 addons.go:59] Setting registry=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.210942 2022781 addons.go:135] Setting addon registry=true in "addons-20210813032940-2022292"
	I0813 03:30:57.210969 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.211436 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.211498 2022781 addons.go:59] Setting storage-provisioner=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.211507 2022781 addons.go:135] Setting addon storage-provisioner=true in "addons-20210813032940-2022292"
	W0813 03:30:57.211512 2022781 addons.go:147] addon storage-provisioner should already be in state true
	I0813 03:30:57.211528 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.211909 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.359893 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0813 03:30:57.359970 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0813 03:30:57.359983 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0813 03:30:57.360041 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.397371 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 03:30:57.398431 2022781 node_ready.go:35] waiting up to 6m0s for node "addons-20210813032940-2022292" to be "Ready" ...
	I0813 03:30:57.457671 2022781 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0813 03:30:57.464684 2022781 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0813 03:30:57.571352 2022781 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0813 03:30:57.571410 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 03:30:57.571426 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 03:30:57.571484 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.621898 2022781 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0813 03:30:57.628181 2022781 out.go:177]   - Using image registry:2.7.1
	I0813 03:30:57.628300 2022781 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0813 03:30:57.628309 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0813 03:30:57.628386 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.642700 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0813 03:30:57.645886 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0813 03:30:57.671912 2022781 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0813 03:30:57.675421 2022781 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 03:30:57.677167 2022781 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 03:30:57.677235 2022781 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0813 03:30:57.677244 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0813 03:30:57.677305 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.724233 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0813 03:30:57.727436 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0813 03:30:57.735580 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0813 03:30:57.742927 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0813 03:30:57.739683 2022781 addons.go:135] Setting addon default-storageclass=true in "addons-20210813032940-2022292"
	I0813 03:30:57.739724 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:57.739755 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.724752 2022781 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0813 03:30:57.744017 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	W0813 03:30:57.745611 2022781 addons.go:147] addon default-storageclass should already be in state true
	I0813 03:30:57.745618 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0813 03:30:57.760402 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0813 03:30:57.756368 2022781 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 03:30:57.756381 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0813 03:30:57.756761 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.765577 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.765794 2022781 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 03:30:57.765818 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 03:30:57.765883 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.765965 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0813 03:30:57.766037 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0813 03:30:57.766058 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0813 03:30:57.766113 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.766218 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.818821 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:57.852135 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:57.889195 2022781 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0813 03:30:57.889281 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.970915 2022781 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 03:30:57.970933 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 03:30:57.970985 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.987520 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 03:30:57.987538 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0813 03:30:58.038677 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.056460 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.071053 2022781 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0813 03:30:58.071076 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0813 03:30:58.078413 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.124468 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.130893 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.143390 2022781 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0813 03:30:58.143407 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0813 03:30:58.145018 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 03:30:58.145035 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 03:30:58.183598 2022781 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0813 03:30:58.183619 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0813 03:30:58.215426 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 03:30:58.215446 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 03:30:58.221069 2022781 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0813 03:30:58.221112 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0813 03:30:58.238980 2022781 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0813 03:30:58.239028 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0813 03:30:58.243449 2022781 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0813 03:30:58.243489 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0813 03:30:58.304607 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0813 03:30:58.310261 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0813 03:30:58.348483 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0813 03:30:58.348534 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0813 03:30:58.368606 2022781 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0813 03:30:58.368664 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0813 03:30:58.373631 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 03:30:58.402890 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0813 03:30:58.402938 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0813 03:30:58.441222 2022781 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0813 03:30:58.441281 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0813 03:30:58.446337 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 03:30:58.505463 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0813 03:30:58.505525 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0813 03:30:58.556304 2022781 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0813 03:30:58.577016 2022781 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.179580348s)
	I0813 03:30:58.577078 2022781 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
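[Note: the completed command above splices a `hosts` block into the CoreDNS Corefile via `sed`, immediately before the `forward` directive. After the `kubectl replace`, the relevant Corefile section looks roughly like this (reconstructed from the sed expression in the log, not copied from the cluster):

```
hosts {
   192.168.49.1 host.minikube.internal
   fallthrough
}
forward . /etc/resolv.conf
```

The `fallthrough` keeps queries for names other than `host.minikube.internal` flowing to the `forward` plugin.]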
	I0813 03:30:58.578408 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 03:30:58.608000 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0813 03:30:58.608059 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0813 03:30:58.663310 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 03:30:58.753249 2022781 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 03:30:58.753272 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0813 03:30:58.754118 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0813 03:30:58.754135 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0813 03:30:58.758582 2022781 addons.go:135] Setting addon gcp-auth=true in "addons-20210813032940-2022292"
	I0813 03:30:58.758626 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:58.759106 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:58.821220 2022781 out.go:177]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0813 03:30:58.823073 2022781 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0813 03:30:58.823123 2022781 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0813 03:30:58.823138 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0813 03:30:58.823192 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:58.886665 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.922146 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0813 03:30:58.922166 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0813 03:30:58.946902 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 03:30:58.975186 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0813 03:30:58.975210 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0813 03:30:58.994077 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0813 03:30:58.994096 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0813 03:30:59.121853 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0813 03:30:59.121918 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0813 03:30:59.311772 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0813 03:30:59.311791 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0813 03:30:59.407661 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:30:59.447040 2022781 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0813 03:30:59.447104 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0813 03:30:59.497092 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0813 03:30:59.497156 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0813 03:30:59.560580 2022781 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0813 03:30:59.560641 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0813 03:30:59.624771 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0813 03:30:59.624832 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0813 03:30:59.714273 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0813 03:30:59.850024 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 03:30:59.850092 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0813 03:30:59.940480 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 03:31:01.011039 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.700714405s)
	I0813 03:31:01.011064 2022781 addons.go:313] Verifying addon registry=true in "addons-20210813032940-2022292"
	I0813 03:31:01.013288 2022781 out.go:177] * Verifying registry addon...
	I0813 03:31:01.014895 2022781 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0813 03:31:01.011401 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.637715134s)
	I0813 03:31:01.015041 2022781 addons.go:313] Verifying addon metrics-server=true in "addons-20210813032940-2022292"
	I0813 03:31:01.011418 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (2.70679067s)
	I0813 03:31:01.015053 2022781 addons.go:313] Verifying addon ingress=true in "addons-20210813032940-2022292"
	I0813 03:31:01.011558 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.565182254s)
	I0813 03:31:01.011669 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.433133243s)
	I0813 03:31:01.017502 2022781 out.go:177] * Verifying ingress addon...
	I0813 03:31:01.019575 2022781 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0813 03:31:01.054527 2022781 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 03:31:01.054568 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:01.075891 2022781 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0813 03:31:01.075946 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:01.441335 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:01.605451 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:01.606015 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:02.073277 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:02.097062 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:02.608493 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:02.684632 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:03.124082 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:03.130892 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:03.449076 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:03.559480 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.612543123s)
	W0813 03:31:03.559512 2022781 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0813 03:31:03.559537 2022781 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0813 03:31:03.559557 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (4.896220344s)
	W0813 03:31:03.559572 2022781 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 03:31:03.559579 2022781 retry.go:31] will retry after 291.140013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 03:31:03.559643 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.845306221s)
	I0813 03:31:03.559655 2022781 addons.go:313] Verifying addon gcp-auth=true in "addons-20210813032940-2022292"
	I0813 03:31:03.563647 2022781 out.go:177] * Verifying gcp-auth addon...
	I0813 03:31:03.565513 2022781 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0813 03:31:03.658489 2022781 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0813 03:31:03.658547 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:03.659098 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:03.679372 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:03.850862 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 03:31:03.920365 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 03:31:04.119619 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:04.140362 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:04.167248 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:04.576001 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:04.585046 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:04.682050 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:04.995932 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.05536482s)
	I0813 03:31:04.996001 2022781 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210813032940-2022292"
	I0813 03:31:04.999927 2022781 out.go:177] * Verifying csi-hostpath-driver addon...
	I0813 03:31:05.001725 2022781 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0813 03:31:05.051965 2022781 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0813 03:31:05.052027 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:05.069278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:05.101185 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:05.260498 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:05.490392 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:05.589788 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:05.591256 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:05.595113 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:05.661721 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:06.068823 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:06.075971 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:06.080721 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:06.092778 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (2.241844201s)
	I0813 03:31:06.092890 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.172455914s)
	I0813 03:31:06.166316 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:06.559342 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:06.560990 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:06.578867 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:06.661958 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:07.056869 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:07.059136 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:07.078935 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:07.162308 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:07.558175 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:07.558364 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:07.579127 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:07.661829 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:07.908271 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:08.057749 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:08.060739 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:08.088631 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:08.166573 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:08.557935 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:08.560943 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:08.586563 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:08.661083 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:09.071552 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:09.071943 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:09.078283 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:09.161747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:09.558910 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:09.560846 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:09.579424 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:09.661747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:10.058388 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:10.059019 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:10.079633 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:10.161876 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:10.407358 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:10.556948 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:10.558279 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:10.578788 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:10.661635 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:11.057295 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:11.059187 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:11.079035 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:11.161068 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:11.557584 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:11.558891 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:11.578602 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:11.661347 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:12.057052 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:12.058710 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:12.079397 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:12.161365 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:12.407663 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:12.557331 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:12.558836 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:12.579555 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:12.661269 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:13.058425 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:13.058819 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:13.079522 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:13.161225 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:13.557182 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:13.559122 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:13.578539 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:13.662249 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:14.056888 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:14.058979 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:14.079485 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:14.161653 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:14.557930 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:14.559013 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:14.578707 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:14.662203 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:14.908057 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:15.058555 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:15.058960 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:15.079677 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:15.161960 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:15.556797 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:15.558599 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:15.579189 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:15.665520 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:16.058280 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:16.059533 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:16.078991 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:16.161336 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:16.557900 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:16.559209 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:16.578911 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:16.662278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:16.908218 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:17.058976 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:17.062797 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:17.078598 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:17.161084 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:17.556309 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:17.558779 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:17.579316 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:17.661159 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:18.056254 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:18.058097 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:18.078601 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:18.161428 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:18.557023 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:18.558366 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:18.578650 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:18.660860 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:19.056829 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:19.057601 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:19.079058 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:19.160810 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:19.406830 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:19.557137 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:19.570593 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:19.578919 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:19.661198 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:20.056580 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:20.058011 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:20.078534 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:20.160728 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:20.556698 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:20.558044 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:20.578573 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:20.661305 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:21.055749 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:21.057234 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:21.078640 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:21.162419 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:21.407789 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:21.556491 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:21.558398 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:21.578920 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:21.661739 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:22.056835 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:22.059358 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:22.078758 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:22.161223 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:22.557289 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:22.558276 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:22.578866 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:22.661269 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:23.056957 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:23.058542 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:23.079179 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:23.160926 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:23.556244 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:23.565480 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:23.579030 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:23.661406 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:23.907689 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:24.056686 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:24.058284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:24.078803 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:24.161543 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:24.557645 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:24.559094 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:24.578505 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:24.661225 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:25.056915 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:25.058921 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:25.079371 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:25.161221 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:25.558548 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:25.560666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:25.578985 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:25.661314 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:25.908555 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:26.057813 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:26.060606 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:26.079261 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:26.162092 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:26.559363 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:26.563308 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:26.579390 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:26.662424 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:27.057172 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:27.058133 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:27.078618 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:27.160769 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:27.557054 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:27.558322 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:27.578892 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:27.661134 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:28.056649 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:28.058426 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:28.078843 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:28.161113 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:28.407578 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:28.556884 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:28.558849 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:28.579213 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:28.661872 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:29.062391 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:29.064747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:29.079349 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:29.161744 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:29.559485 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:29.559615 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:29.588437 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:29.661647 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:30.176028 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:30.178278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:30.178550 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:30.179384 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:30.557737 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:30.559734 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:30.579399 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:30.661487 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:30.908526 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:31.058560 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:31.058776 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:31.079841 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:31.162063 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:31.557377 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:31.559871 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:31.579905 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:31.662325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:32.058012 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:32.059445 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:32.079203 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:32.162061 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:32.556579 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:32.558342 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:32.579193 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:32.661445 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:33.056971 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:33.062490 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:33.079405 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:33.161773 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:33.407515 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:33.557161 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:33.559301 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:33.578865 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:33.662094 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:34.058311 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:34.063650 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:34.079156 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:34.161208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:34.556999 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:34.559078 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:34.578785 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:34.661861 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:35.057159 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:35.058229 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:35.078820 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:35.161890 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:35.407603 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:35.556952 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:35.559040 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:35.579535 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:35.661201 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:36.057747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:36.059208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:36.078661 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:36.161392 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:36.558465 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:36.559070 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:36.578566 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:36.661888 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:37.057392 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:37.060584 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:37.079304 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:37.161650 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:37.564374 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:37.565926 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:37.579818 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:37.661622 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:37.907609 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:38.058278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:38.058920 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:38.079175 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:38.161440 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:38.557591 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:38.559313 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:38.579040 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:38.661840 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:39.056666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:39.058149 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:39.078850 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:39.161258 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:39.556921 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:39.558888 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:39.579613 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:39.661796 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:40.057861 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:40.059831 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:40.079270 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:40.161703 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:40.407456 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:40.556897 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:40.558705 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:40.579932 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:40.661833 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:41.056472 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:41.059744 2022781 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 03:31:41.059763 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:41.079609 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:41.161642 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:41.407679 2022781 node_ready.go:49] node "addons-20210813032940-2022292" has status "Ready":"True"
	I0813 03:31:41.407707 2022781 node_ready.go:38] duration metric: took 44.009250418s waiting for node "addons-20210813032940-2022292" to be "Ready" ...
	I0813 03:31:41.407716 2022781 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 03:31:41.415156 2022781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:41.558078 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:41.560565 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:41.579199 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:41.661913 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:42.056679 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:42.059509 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:42.079025 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:42.161532 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:42.556679 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:42.559194 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:42.579531 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:42.660981 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:43.057202 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:43.059371 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:43.079006 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:43.161759 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:43.434237 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:43.558666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:43.558977 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:43.579098 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:43.662316 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:44.056791 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:44.058613 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:44.079594 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:44.161076 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:44.556840 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:44.559275 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:44.578741 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:44.661566 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:45.057098 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:45.058829 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:45.079389 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:45.161883 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:45.436262 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:45.575150 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:45.584294 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:45.585026 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:45.661603 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:46.057182 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:46.059769 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:46.079195 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:46.162319 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:46.556459 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:46.559487 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:46.580902 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:46.661798 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:47.057717 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:47.059285 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:47.079021 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:47.161669 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:47.438260 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:47.559550 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:47.559922 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:47.579440 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:47.661740 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:48.057821 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:48.061063 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:48.079047 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:48.161877 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:48.559849 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:48.560623 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:48.579150 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:48.662015 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:49.056892 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:49.058463 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:49.078937 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:49.163554 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:49.557752 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:49.564778 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:49.579620 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:49.661737 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:49.933528 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:50.061568 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:50.062915 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:50.078976 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:50.161696 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:50.557144 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:50.559099 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:50.578749 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:50.661769 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:51.058325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:51.060796 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:51.082729 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:51.165438 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:51.558033 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:51.560059 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:51.578642 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:51.661962 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:51.937811 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:31:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:52.058037 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:52.065290 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:52.082306 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:52.162234 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:52.561858 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:52.563177 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:52.579176 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:52.662893 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:53.058836 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:53.059610 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:53.079567 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:53.161708 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:53.556763 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:53.560389 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:53.580992 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:53.668937 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:54.066272 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:54.066848 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:54.083319 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:54.162629 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:54.436037 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:31:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:54.556924 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:54.559716 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:54.579214 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:54.662311 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:55.059249 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:55.061042 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:55.078970 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:55.161897 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:55.557313 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:55.559641 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:55.579729 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:55.661895 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:56.059000 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:56.062237 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:56.079500 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:56.161665 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:56.436902 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:31:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:56.566261 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:56.566676 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:56.578767 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:56.661644 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:57.057090 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:57.059556 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:57.079502 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:57.161359 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:57.557584 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:57.559448 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:57.579110 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:57.661572 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:58.057085 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:58.058884 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:58.079179 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:58.161220 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:58.538956 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace has status "Ready":"False"
	I0813 03:31:58.557350 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:58.559691 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:58.579943 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:58.791628 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:59.059276 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:59.061564 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:59.079752 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:59.161666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:59.435689 2022781 pod_ready.go:92] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.435753 2022781 pod_ready.go:81] duration metric: took 18.020568142s waiting for pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.435791 2022781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.441594 2022781 pod_ready.go:92] pod "etcd-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.441612 2022781 pod_ready.go:81] duration metric: took 5.786951ms waiting for pod "etcd-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.441623 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.445277 2022781 pod_ready.go:92] pod "kube-apiserver-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.445293 2022781 pod_ready.go:81] duration metric: took 3.644132ms waiting for pod "kube-apiserver-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.445302 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.449072 2022781 pod_ready.go:92] pod "kube-controller-manager-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.449092 2022781 pod_ready.go:81] duration metric: took 3.769744ms waiting for pod "kube-controller-manager-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.449101 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9knsw" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.452876 2022781 pod_ready.go:92] pod "kube-proxy-9knsw" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.452894 2022781 pod_ready.go:81] duration metric: took 3.786088ms waiting for pod "kube-proxy-9knsw" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.452902 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.557933 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:59.559887 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:59.579750 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:59.661390 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:59.833698 2022781 pod_ready.go:92] pod "kube-scheduler-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.833721 2022781 pod_ready.go:81] duration metric: took 380.809632ms waiting for pod "kube-scheduler-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.833732 2022781 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace to be "Ready" ...
	I0813 03:32:00.057507 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:00.060450 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:00.094038 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:00.162156 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:00.559531 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:00.560807 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:00.581504 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:00.662211 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:01.059861 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:01.066023 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:01.079504 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:01.161767 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:01.559334 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:01.561759 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:01.579832 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:01.662353 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:02.056935 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:02.059383 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:02.079185 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:02.161090 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:02.240408 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:02.561477 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:02.562142 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:02.579306 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:02.662331 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:03.059012 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:03.059508 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:03.079655 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:03.162089 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:03.559560 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:03.561402 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:03.579678 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:03.661723 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:04.059263 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:04.059783 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:04.078943 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:04.162284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:04.241030 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:04.557153 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:04.560141 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:04.579728 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:04.662513 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:05.057080 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:05.059685 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:05.079032 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:05.162737 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:05.557144 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:05.559484 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:05.579579 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:05.661242 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:06.057042 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:06.059442 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:06.079396 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:06.162304 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:06.557204 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:06.559494 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:06.579157 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:06.661762 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:06.740372 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:07.057455 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:07.059940 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:07.079231 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:07.161987 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:07.559284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:07.559814 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:07.579364 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:07.662813 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:08.073362 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:08.079626 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:08.084214 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:08.163307 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:08.559617 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:08.560235 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:08.579114 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:08.662066 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:08.741784 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:09.056830 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:09.059188 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:09.088691 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:09.162660 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:09.562253 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:09.563021 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:09.579227 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:09.661934 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:10.059459 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:10.063371 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:10.079184 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:10.175815 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:10.593011 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:10.593586 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:10.594885 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:10.661226 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:11.058162 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:11.058643 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:11.079722 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:11.161708 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:11.240182 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:11.558205 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:11.560235 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:11.586837 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:11.661385 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:11.739491 2022781 pod_ready.go:92] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"True"
	I0813 03:32:11.739550 2022781 pod_ready.go:81] duration metric: took 11.905794872s waiting for pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace to be "Ready" ...
	I0813 03:32:11.739581 2022781 pod_ready.go:38] duration metric: took 30.331840863s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 03:32:11.739626 2022781 api_server.go:50] waiting for apiserver process to appear ...
	I0813 03:32:11.739656 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 03:32:11.739748 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 03:32:11.812172 2022781 cri.go:76] found id: "e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:11.812187 2022781 cri.go:76] found id: ""
	I0813 03:32:11.812193 2022781 logs.go:270] 1 containers: [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3]
	I0813 03:32:11.812263 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.814780 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 03:32:11.814825 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 03:32:11.843019 2022781 cri.go:76] found id: "3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:11.843063 2022781 cri.go:76] found id: ""
	I0813 03:32:11.843075 2022781 logs.go:270] 1 containers: [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11]
	I0813 03:32:11.843111 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.845551 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 03:32:11.845596 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 03:32:11.867609 2022781 cri.go:76] found id: "76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:11.867624 2022781 cri.go:76] found id: ""
	I0813 03:32:11.867630 2022781 logs.go:270] 1 containers: [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd]
	I0813 03:32:11.867685 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.870294 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 03:32:11.870336 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 03:32:11.894045 2022781 cri.go:76] found id: "ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:11.894086 2022781 cri.go:76] found id: ""
	I0813 03:32:11.894097 2022781 logs.go:270] 1 containers: [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b]
	I0813 03:32:11.894130 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.896655 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 03:32:11.896697 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 03:32:11.925307 2022781 cri.go:76] found id: "b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:11.925323 2022781 cri.go:76] found id: ""
	I0813 03:32:11.925330 2022781 logs.go:270] 1 containers: [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea]
	I0813 03:32:11.925365 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.927899 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 03:32:11.927939 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 03:32:11.949495 2022781 cri.go:76] found id: ""
	I0813 03:32:11.949534 2022781 logs.go:270] 0 containers: []
	W0813 03:32:11.949546 2022781 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 03:32:11.949552 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 03:32:11.949587 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 03:32:11.971549 2022781 cri.go:76] found id: "f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:11.971566 2022781 cri.go:76] found id: ""
	I0813 03:32:11.971571 2022781 logs.go:270] 1 containers: [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca]
	I0813 03:32:11.971620 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.974054 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 03:32:11.974095 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 03:32:12.003331 2022781 cri.go:76] found id: "fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:12.003378 2022781 cri.go:76] found id: ""
	I0813 03:32:12.003394 2022781 logs.go:270] 1 containers: [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377]
	I0813 03:32:12.003459 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:12.006140 2022781 logs.go:123] Gathering logs for kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] ...
	I0813 03:32:12.006155 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:12.033019 2022781 logs.go:123] Gathering logs for containerd ...
	I0813 03:32:12.033040 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 03:32:12.068283 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:12.068688 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:12.079690 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:12.118724 2022781 logs.go:123] Gathering logs for kubelet ...
	I0813 03:32:12.118744 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 03:32:12.161825 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0813 03:32:12.177299 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:12.177544 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:12.214534 2022781 logs.go:123] Gathering logs for dmesg ...
	I0813 03:32:12.214555 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 03:32:12.230395 2022781 logs.go:123] Gathering logs for etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] ...
	I0813 03:32:12.230412 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:12.255618 2022781 logs.go:123] Gathering logs for coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] ...
	I0813 03:32:12.255638 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:12.276571 2022781 logs.go:123] Gathering logs for kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] ...
	I0813 03:32:12.276592 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:12.300947 2022781 logs.go:123] Gathering logs for storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] ...
	I0813 03:32:12.300966 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:12.323282 2022781 logs.go:123] Gathering logs for kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] ...
	I0813 03:32:12.323301 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:12.375068 2022781 logs.go:123] Gathering logs for container status ...
	I0813 03:32:12.375093 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 03:32:12.401272 2022781 logs.go:123] Gathering logs for describe nodes ...
	I0813 03:32:12.401291 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 03:32:12.548904 2022781 logs.go:123] Gathering logs for kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] ...
	I0813 03:32:12.548930 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:12.559375 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:12.562030 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:12.579464 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:12.659062 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:12.659085 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0813 03:32:12.659194 2022781 out.go:242] X Problems detected in kubelet:
	W0813 03:32:12.659204 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:12.659211 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:12.659217 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:12.659225 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:32:12.673862 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:13.059325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:13.062776 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:13.087927 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:13.162237 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:13.560080 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:13.565340 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:13.579524 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:13.661442 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:14.060788 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:14.062591 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:14.079441 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:14.161105 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:14.556863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:14.558839 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:14.579411 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:14.661226 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:15.058603 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:15.060542 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:15.080085 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:15.162195 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:15.558043 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:15.559656 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:15.580190 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:15.665749 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:16.057055 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:16.059113 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:16.079769 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:16.161497 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:16.559200 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:16.560087 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:16.579835 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:16.662097 2022781 kapi.go:108] duration metric: took 1m13.096582398s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0813 03:32:16.664068 2022781 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20210813032940-2022292 cluster.
	I0813 03:32:16.670830 2022781 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0813 03:32:16.676788 2022781 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0813 03:32:17.057899 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:17.059706 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:17.079747 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:17.558152 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:17.560601 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:17.579004 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:18.061308 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:18.062057 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:18.079880 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:18.558150 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:18.560757 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:18.579782 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:19.056963 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:19.059210 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:19.078839 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:19.557242 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:19.559678 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:19.579861 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:20.056923 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:20.060320 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:20.078955 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:20.560350 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:20.561208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:20.579305 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:21.067467 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:21.067845 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:21.080703 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:21.556957 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:21.564057 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:21.579253 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:22.062724 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:22.065424 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:22.080480 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:22.558437 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:22.563395 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:22.580348 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:22.659953 2022781 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:32:22.694110 2022781 api_server.go:70] duration metric: took 1m25.501422314s to wait for apiserver process to appear ...
	I0813 03:32:22.694173 2022781 api_server.go:86] waiting for apiserver healthz status ...
	I0813 03:32:22.694205 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 03:32:22.694282 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 03:32:22.732785 2022781 cri.go:76] found id: "e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:22.732842 2022781 cri.go:76] found id: ""
	I0813 03:32:22.732861 2022781 logs.go:270] 1 containers: [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3]
	I0813 03:32:22.732936 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.736230 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 03:32:22.736312 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 03:32:22.765159 2022781 cri.go:76] found id: "3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:22.765209 2022781 cri.go:76] found id: ""
	I0813 03:32:22.765228 2022781 logs.go:270] 1 containers: [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11]
	I0813 03:32:22.765308 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.768400 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 03:32:22.768493 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 03:32:22.825296 2022781 cri.go:76] found id: "76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:22.825357 2022781 cri.go:76] found id: ""
	I0813 03:32:22.825375 2022781 logs.go:270] 1 containers: [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd]
	I0813 03:32:22.825450 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.829195 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 03:32:22.829285 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 03:32:22.881181 2022781 cri.go:76] found id: "ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:22.881242 2022781 cri.go:76] found id: ""
	I0813 03:32:22.881259 2022781 logs.go:270] 1 containers: [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b]
	I0813 03:32:22.881329 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.885820 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 03:32:22.885908 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 03:32:22.982661 2022781 cri.go:76] found id: "b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:22.982714 2022781 cri.go:76] found id: ""
	I0813 03:32:22.982741 2022781 logs.go:270] 1 containers: [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea]
	I0813 03:32:22.982813 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.986328 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 03:32:22.986378 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 03:32:23.027000 2022781 cri.go:76] found id: ""
	I0813 03:32:23.027018 2022781 logs.go:270] 0 containers: []
	W0813 03:32:23.027025 2022781 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 03:32:23.027032 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 03:32:23.027083 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 03:32:23.060139 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:23.061278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:23.069559 2022781 cri.go:76] found id: "f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:23.069608 2022781 cri.go:76] found id: ""
	I0813 03:32:23.069627 2022781 logs.go:270] 1 containers: [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca]
	I0813 03:32:23.069694 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:23.074554 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 03:32:23.074645 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 03:32:23.080185 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:23.110198 2022781 cri.go:76] found id: "fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:23.110256 2022781 cri.go:76] found id: ""
	I0813 03:32:23.110274 2022781 logs.go:270] 1 containers: [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377]
	I0813 03:32:23.110353 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:23.114338 2022781 logs.go:123] Gathering logs for containerd ...
	I0813 03:32:23.114442 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 03:32:23.226451 2022781 logs.go:123] Gathering logs for dmesg ...
	I0813 03:32:23.226481 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 03:32:23.241683 2022781 logs.go:123] Gathering logs for describe nodes ...
	I0813 03:32:23.241711 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 03:32:23.499313 2022781 logs.go:123] Gathering logs for kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] ...
	I0813 03:32:23.499341 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:23.556000 2022781 logs.go:123] Gathering logs for kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] ...
	I0813 03:32:23.556029 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:23.561069 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:23.563716 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:23.580520 2022781 kapi.go:108] duration metric: took 1m22.560942925s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0813 03:32:23.620196 2022781 logs.go:123] Gathering logs for kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] ...
	I0813 03:32:23.620226 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:23.649357 2022781 logs.go:123] Gathering logs for storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] ...
	I0813 03:32:23.649384 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:23.680023 2022781 logs.go:123] Gathering logs for container status ...
	I0813 03:32:23.680049 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 03:32:23.722823 2022781 logs.go:123] Gathering logs for kubelet ...
	I0813 03:32:23.722853 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0813 03:32:23.784859 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:23.785152 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:23.823338 2022781 logs.go:123] Gathering logs for etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] ...
	I0813 03:32:23.823392 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:23.857325 2022781 logs.go:123] Gathering logs for coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] ...
	I0813 03:32:23.857354 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:23.888625 2022781 logs.go:123] Gathering logs for kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] ...
	I0813 03:32:23.888649 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:23.943578 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:23.943633 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0813 03:32:23.943825 2022781 out.go:242] X Problems detected in kubelet:
	W0813 03:32:23.943838 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:23.943847 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:23.943858 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:23.943863 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:32:24.063676 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:24.065290 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:24.557303 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:24.559593 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:25.058802 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:25.059726 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:25.557197 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:25.559980 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:26.057045 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:26.059553 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:26.556742 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:26.559573 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:27.058797 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:27.065273 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:27.557423 2022781 kapi.go:108] duration metric: took 1m22.555695527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0813 03:32:27.567926 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:28.058729 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:28.558602 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:29.058767 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:29.558686 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:30.059333 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:30.558577 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:31.057998 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:31.558541 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:32.058329 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:32.558334 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:33.058677 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:33.559382 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:33.945317 2022781 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 03:32:33.954121 2022781 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 03:32:33.955027 2022781 api_server.go:139] control plane version: v1.21.3
	I0813 03:32:33.955048 2022781 api_server.go:129] duration metric: took 11.260858091s to wait for apiserver health ...
	I0813 03:32:33.955057 2022781 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 03:32:33.955075 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 03:32:33.955134 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 03:32:33.989217 2022781 cri.go:76] found id: "e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:33.989241 2022781 cri.go:76] found id: ""
	I0813 03:32:33.989246 2022781 logs.go:270] 1 containers: [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3]
	I0813 03:32:33.989289 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:33.991827 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 03:32:33.991873 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 03:32:34.015261 2022781 cri.go:76] found id: "3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:34.015279 2022781 cri.go:76] found id: ""
	I0813 03:32:34.015285 2022781 logs.go:270] 1 containers: [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11]
	I0813 03:32:34.015324 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.017874 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 03:32:34.017921 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 03:32:34.040647 2022781 cri.go:76] found id: "76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:34.040664 2022781 cri.go:76] found id: ""
	I0813 03:32:34.040669 2022781 logs.go:270] 1 containers: [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd]
	I0813 03:32:34.040711 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.043319 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 03:32:34.043370 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 03:32:34.059259 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:34.069010 2022781 cri.go:76] found id: "ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:34.069034 2022781 cri.go:76] found id: ""
	I0813 03:32:34.069040 2022781 logs.go:270] 1 containers: [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b]
	I0813 03:32:34.069080 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.071835 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 03:32:34.071887 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 03:32:34.095116 2022781 cri.go:76] found id: "b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:34.095139 2022781 cri.go:76] found id: ""
	I0813 03:32:34.095145 2022781 logs.go:270] 1 containers: [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea]
	I0813 03:32:34.095190 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.097821 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 03:32:34.097868 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 03:32:34.119306 2022781 cri.go:76] found id: ""
	I0813 03:32:34.119322 2022781 logs.go:270] 0 containers: []
	W0813 03:32:34.119328 2022781 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 03:32:34.119334 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 03:32:34.119379 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 03:32:34.142251 2022781 cri.go:76] found id: "f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:34.142273 2022781 cri.go:76] found id: ""
	I0813 03:32:34.142279 2022781 logs.go:270] 1 containers: [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca]
	I0813 03:32:34.142334 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.144992 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 03:32:34.145041 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 03:32:34.167211 2022781 cri.go:76] found id: "fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:34.167227 2022781 cri.go:76] found id: ""
	I0813 03:32:34.167232 2022781 logs.go:270] 1 containers: [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377]
	I0813 03:32:34.167272 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.169792 2022781 logs.go:123] Gathering logs for kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] ...
	I0813 03:32:34.169810 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:34.216311 2022781 logs.go:123] Gathering logs for containerd ...
	I0813 03:32:34.216383 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 03:32:34.298041 2022781 logs.go:123] Gathering logs for dmesg ...
	I0813 03:32:34.298069 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 03:32:34.310283 2022781 logs.go:123] Gathering logs for describe nodes ...
	I0813 03:32:34.310309 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 03:32:34.442777 2022781 logs.go:123] Gathering logs for kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] ...
	I0813 03:32:34.442803 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:34.491851 2022781 logs.go:123] Gathering logs for etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] ...
	I0813 03:32:34.491910 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:34.521318 2022781 logs.go:123] Gathering logs for kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] ...
	I0813 03:32:34.521347 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:34.544930 2022781 logs.go:123] Gathering logs for storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] ...
	I0813 03:32:34.544954 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:34.558879 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:34.568978 2022781 logs.go:123] Gathering logs for container status ...
	I0813 03:32:34.569002 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 03:32:34.595398 2022781 logs.go:123] Gathering logs for kubelet ...
	I0813 03:32:34.595422 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0813 03:32:34.648293 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:34.648542 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:34.694139 2022781 logs.go:123] Gathering logs for coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] ...
	I0813 03:32:34.694164 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:34.716899 2022781 logs.go:123] Gathering logs for kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] ...
	I0813 03:32:34.716924 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:34.743237 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:34.743258 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0813 03:32:34.743376 2022781 out.go:242] X Problems detected in kubelet:
	W0813 03:32:34.743389 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:34.743396 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:34.743409 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:34.743414 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:32:35.059544 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:35.558823 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:36.058742 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:36.558321 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:37.059068 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:37.559539 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:38.059128 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:38.559174 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:39.058759 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:39.558560 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:40.059643 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:40.558958 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:41.058990 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:41.559162 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:42.057900 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:42.558835 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:43.059061 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:43.559393 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:44.058040 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:44.558542 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:44.758638 2022781 system_pods.go:59] 18 kube-system pods found
	I0813 03:32:44.758676 2022781 system_pods.go:61] "coredns-558bd4d5db-69x4l" [ef73518e-08da-4a27-a504-85f6e14fde4e] Running
	I0813 03:32:44.758682 2022781 system_pods.go:61] "csi-hostpath-attacher-0" [5b8c9e1d-36af-484a-8f71-8cbdc93e1848] Running
	I0813 03:32:44.758686 2022781 system_pods.go:61] "csi-hostpath-provisioner-0" [d285e137-046a-4c9f-8a5c-b513a07b4ac1] Running
	I0813 03:32:44.758691 2022781 system_pods.go:61] "csi-hostpath-resizer-0" [d005f1b6-ee22-4ef7-aabe-9ad79b904d8e] Running
	I0813 03:32:44.758696 2022781 system_pods.go:61] "csi-hostpath-snapshotter-0" [203926d6-73de-49f5-8477-3d1cf26d233e] Running
	I0813 03:32:44.758701 2022781 system_pods.go:61] "csi-hostpathplugin-0" [e5a40ad3-af6f-4aca-9d33-5d9620d28d85] Running
	I0813 03:32:44.758707 2022781 system_pods.go:61] "etcd-addons-20210813032940-2022292" [5e80a189-29fa-44b4-b290-7896746c4542] Running
	I0813 03:32:44.758712 2022781 system_pods.go:61] "kindnet-6qhgq" [41b60387-4d90-4496-a617-d04aaf6d654a] Running
	I0813 03:32:44.758717 2022781 system_pods.go:61] "kube-apiserver-addons-20210813032940-2022292" [e344bbc9-9190-49fe-915e-c8460a1fbe6e] Running
	I0813 03:32:44.758727 2022781 system_pods.go:61] "kube-controller-manager-addons-20210813032940-2022292" [d2701d00-a6a6-4ab5-b211-39592390ce8e] Running
	I0813 03:32:44.758731 2022781 system_pods.go:61] "kube-proxy-9knsw" [05bf3f71-808d-4e24-a416-a4434e16e0ac] Running
	I0813 03:32:44.758743 2022781 system_pods.go:61] "kube-scheduler-addons-20210813032940-2022292" [6488f7a4-94e7-41c1-b202-305d463dfac2] Running
	I0813 03:32:44.758747 2022781 system_pods.go:61] "metrics-server-77c99ccb96-vn6tn" [985bccb5-7c0b-4df0-91ce-0cd5e67a9688] Running
	I0813 03:32:44.758755 2022781 system_pods.go:61] "registry-5f6m6" [b842920d-03bf-4426-9765-5fb36b90afb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 03:32:44.758768 2022781 system_pods.go:61] "registry-proxy-dg8n7" [3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 03:32:44.758774 2022781 system_pods.go:61] "snapshot-controller-989f9ddc8-6wzsp" [4b25bcd7-a3bd-4549-9476-87a13b4022d1] Running
	I0813 03:32:44.758779 2022781 system_pods.go:61] "snapshot-controller-989f9ddc8-shj76" [0d371d4d-113a-4eb0-bb5d-4d52d2ecf7a5] Running
	I0813 03:32:44.758784 2022781 system_pods.go:61] "storage-provisioner" [9788a546-bd3b-45bb-98c8-f5dc3efa1001] Running
	I0813 03:32:44.758792 2022781 system_pods.go:74] duration metric: took 10.803729677s to wait for pod list to return data ...
	I0813 03:32:44.758802 2022781 default_sa.go:34] waiting for default service account to be created ...
	I0813 03:32:44.761387 2022781 default_sa.go:45] found service account: "default"
	I0813 03:32:44.761408 2022781 default_sa.go:55] duration metric: took 2.593402ms for default service account to be created ...
	I0813 03:32:44.761414 2022781 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 03:32:44.774394 2022781 system_pods.go:86] 18 kube-system pods found
	I0813 03:32:44.774420 2022781 system_pods.go:89] "coredns-558bd4d5db-69x4l" [ef73518e-08da-4a27-a504-85f6e14fde4e] Running
	I0813 03:32:44.774427 2022781 system_pods.go:89] "csi-hostpath-attacher-0" [5b8c9e1d-36af-484a-8f71-8cbdc93e1848] Running
	I0813 03:32:44.774432 2022781 system_pods.go:89] "csi-hostpath-provisioner-0" [d285e137-046a-4c9f-8a5c-b513a07b4ac1] Running
	I0813 03:32:44.774441 2022781 system_pods.go:89] "csi-hostpath-resizer-0" [d005f1b6-ee22-4ef7-aabe-9ad79b904d8e] Running
	I0813 03:32:44.774451 2022781 system_pods.go:89] "csi-hostpath-snapshotter-0" [203926d6-73de-49f5-8477-3d1cf26d233e] Running
	I0813 03:32:44.774456 2022781 system_pods.go:89] "csi-hostpathplugin-0" [e5a40ad3-af6f-4aca-9d33-5d9620d28d85] Running
	I0813 03:32:44.774464 2022781 system_pods.go:89] "etcd-addons-20210813032940-2022292" [5e80a189-29fa-44b4-b290-7896746c4542] Running
	I0813 03:32:44.774469 2022781 system_pods.go:89] "kindnet-6qhgq" [41b60387-4d90-4496-a617-d04aaf6d654a] Running
	I0813 03:32:44.774475 2022781 system_pods.go:89] "kube-apiserver-addons-20210813032940-2022292" [e344bbc9-9190-49fe-915e-c8460a1fbe6e] Running
	I0813 03:32:44.774484 2022781 system_pods.go:89] "kube-controller-manager-addons-20210813032940-2022292" [d2701d00-a6a6-4ab5-b211-39592390ce8e] Running
	I0813 03:32:44.774489 2022781 system_pods.go:89] "kube-proxy-9knsw" [05bf3f71-808d-4e24-a416-a4434e16e0ac] Running
	I0813 03:32:44.774497 2022781 system_pods.go:89] "kube-scheduler-addons-20210813032940-2022292" [6488f7a4-94e7-41c1-b202-305d463dfac2] Running
	I0813 03:32:44.774502 2022781 system_pods.go:89] "metrics-server-77c99ccb96-vn6tn" [985bccb5-7c0b-4df0-91ce-0cd5e67a9688] Running
	I0813 03:32:44.774514 2022781 system_pods.go:89] "registry-5f6m6" [b842920d-03bf-4426-9765-5fb36b90afb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 03:32:44.774522 2022781 system_pods.go:89] "registry-proxy-dg8n7" [3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 03:32:44.774530 2022781 system_pods.go:89] "snapshot-controller-989f9ddc8-6wzsp" [4b25bcd7-a3bd-4549-9476-87a13b4022d1] Running
	I0813 03:32:44.774536 2022781 system_pods.go:89] "snapshot-controller-989f9ddc8-shj76" [0d371d4d-113a-4eb0-bb5d-4d52d2ecf7a5] Running
	I0813 03:32:44.774545 2022781 system_pods.go:89] "storage-provisioner" [9788a546-bd3b-45bb-98c8-f5dc3efa1001] Running
	I0813 03:32:44.774550 2022781 system_pods.go:126] duration metric: took 13.132138ms to wait for k8s-apps to be running ...
	I0813 03:32:44.774559 2022781 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 03:32:44.774606 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 03:32:44.786726 2022781 system_svc.go:56] duration metric: took 12.16177ms WaitForService to wait for kubelet.
	I0813 03:32:44.786742 2022781 kubeadm.go:547] duration metric: took 1m47.594069487s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 03:32:44.786764 2022781 node_conditions.go:102] verifying NodePressure condition ...
	I0813 03:32:44.790042 2022781 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0813 03:32:44.790072 2022781 node_conditions.go:123] node cpu capacity is 2
	I0813 03:32:44.790084 2022781 node_conditions.go:105] duration metric: took 3.315795ms to run NodePressure ...
	I0813 03:32:44.790093 2022781 start.go:231] waiting for startup goroutines ...
	I0813 03:32:45.059527 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:45.558450 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:46.058008 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:46.559409 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:47.058981 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:47.559418 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:48.058295 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:48.559417 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:49.058479 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:49.558767 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:50.059036 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:50.559080 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:51.058876 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:51.559246 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:52.059058 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:52.559792 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:53.059872 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:53.559369 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:54.059719 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:54.558692 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:55.059624 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:55.558707 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:56.058551 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:56.558705 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:57.059182 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:57.558288 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:58.058885 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:58.559208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:59.058341 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:59.558602 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:00.058816 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:00.559084 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:01.065657 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:01.558657 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:02.059321 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:02.559541 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:03.059005 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:03.559631 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:04.058927 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:04.558678 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:05.059707 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:05.558780 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:06.058956 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:06.559528 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:07.059164 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:07.558609 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:08.059085 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:08.558274 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:09.058026 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:09.558984 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:10.058747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:10.558098 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:11.058941 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:11.559199 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:12.059601 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:12.562803 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:13.058537 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:13.558131 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:14.059592 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:14.558844 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:15.059696 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:15.559326 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:16.059259 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:16.558421 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:17.058713 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:17.558903 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:18.059615 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:18.558863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:19.058406 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:19.558672 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:20.059847 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:20.558399 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:21.137157 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:21.558717 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:22.059791 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:22.559467 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:23.058838 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:23.558887 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:24.060067 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:24.559548 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:25.058905 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:25.558800 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:26.059414 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:26.558703 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:27.059411 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:27.559626 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:28.059432 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:28.559361 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:29.059128 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:29.558985 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:30.062482 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:30.558231 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:31.058771 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:31.558406 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:32.059047 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:32.559142 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:33.058540 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:33.557931 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:34.058498 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:34.558400 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:35.058954 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:35.559196 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:36.059489 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:36.558425 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:37.059036 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:37.559521 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:38.059863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:38.558898 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:39.059513 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:39.558045 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:40.059686 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:40.560770 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:41.059054 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:41.566863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:42.059037 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:42.558754 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:43.060014 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:43.558325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:44.058848 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:44.558941 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:45.059055 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:45.559095 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:46.059579 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:46.558476 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:47.059065 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:47.559727 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:48.058950 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:48.559632 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:49.075110 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:49.558722 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:50.058944 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:50.558547 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:51.058757 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:51.559264 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:52.059295 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:52.558567 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:53.059202 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:53.558807 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:54.066706 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:54.559004 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:55.059681 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:55.558717 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:56.059365 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:56.558292 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:57.059264 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:57.558279 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:58.058461 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:58.558727 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:59.059322 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:59.559163 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:00.058942 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:00.558919 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:01.058921 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:01.558792 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:02.058858 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:02.558354 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:03.058655 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:03.558982 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:04.059730 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:04.558307 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:05.059036 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:05.559489 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:06.059148 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:06.558284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:07.059011 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:07.559645 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:08.059884 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:08.558543 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:09.059046 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:09.559657 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:10.059052 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:10.559013 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:11.058582 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:11.558297 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:12.059000 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:12.558873 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:13.059650 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:13.559364 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:14.059969 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:14.565763 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:15.059630 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:15.558772 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:16.059717 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:16.559317 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:17.059424 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:17.558716 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:18.058577 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:18.557995 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:19.059142 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:19.558516 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:20.058731 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:20.558597 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:21.058881 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:21.639918 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... same "waiting for pod \"kubernetes.io/minikube-addons=registry\", current state: Pending" message repeated every ~500ms from 03:34:22 through 03:36:37 (271 lines elided); pod remained Pending throughout ...]
	I0813 03:36:37.558194 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:38.058814 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:38.559211 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:39.059575 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:39.558350 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:40.058603 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:40.559147 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:41.059124 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:41.558651 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:42.059562 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:42.558506 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:43.058986 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:43.558651 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:44.059589 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:44.558516 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:45.059125 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:45.558154 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:46.058948 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:46.558804 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:47.059781 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:47.558728 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:48.059027 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:48.559730 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:49.059279 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:49.578391 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:50.058436 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:50.558971 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:51.058593 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:51.558295 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:52.058886 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:52.559329 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:53.058756 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:53.557754 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:54.059400 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:54.558199 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:55.058119 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:55.560244 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:56.059004 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:56.561964 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:57.059323 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:57.558414 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:58.058938 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:58.558617 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:59.059283 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:59.574886 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:00.059290 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:00.558331 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:01.058314 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:01.061314 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:01.061337 2022781 kapi.go:108] duration metric: took 6m0.046443715s to wait for kubernetes.io/minikube-addons=registry ...
	W0813 03:37:01.061452 2022781 out.go:242] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: timed out waiting for the condition]
	I0813 03:37:01.063789 2022781 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, olm, volumesnapshots, gcp-auth, ingress, csi-hostpath-driver
	I0813 03:37:01.063812 2022781 addons.go:344] enableAddons completed in 6m3.870880492s
	I0813 03:37:01.394020 2022781 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0813 03:37:01.396720 2022781 out.go:177] * Done! kubectl is now configured to use "addons-20210813032940-2022292" cluster and "default" namespace by default
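	
	The 6m0s wait above timed out on the label selector `kubernetes.io/minikube-addons=registry`. A minimal triage sketch against this state, assuming `kubectl` is already pointed at the minikube cluster (the commands are illustrative, not part of the harness):

```shell
# List the pods behind the selector the addon wait polled on
kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry -o wide

# Surface the failure reason; pod events appear at the bottom of the output
kubectl -n kube-system describe pods -l kubernetes.io/minikube-addons=registry
```

In this run the describe output would be expected to surface the Docker Hub 429 pull errors visible in the containerd journal below.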
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID
	50a125019a223       d544402579747       3 minutes ago       Exited              olm-operator                             6                   c13c7bddfc538
	eab8e5f488bb8       357aab9e21a8d       3 minutes ago       Running             registry                                 0                   e960d2aa7dc02
	2f2efca15de85       d544402579747       4 minutes ago       Exited              catalog-operator                         6                   4f10e3f5836dd
	42d861814766f       60dc18151daf8       4 minutes ago       Exited              registry-proxy                           6                   0f0d8cb5ccd61
	62187cb4e0d16       ab63026e5f864       9 minutes ago       Running             liveness-probe                           0                   b43814b8b8887
	3dcb14bafa42b       f8f69c8b53974       9 minutes ago       Running             hostpath                                 0                   b43814b8b8887
	92c18b0912a62       bac9ddccb0c70       9 minutes ago       Running             controller                               0                   7b09157779528
	02fdce8be21e9       1f46a863d2aa9       9 minutes ago       Running             node-driver-registrar                    0                   b43814b8b8887
	40dfe1174d820       ff9e753cbb985       9 minutes ago       Running             gcp-auth                                 0                   7482061f45a92
	52df130a57b86       69724f415cab8       9 minutes ago       Running             csi-attacher                             0                   dcaee0a8b6fe6
	95dcf4a47993d       a883f7fc35610       9 minutes ago       Exited              patch                                    0                   537ace0ace14b
	12613d1687ffb       e3597035e9357       9 minutes ago       Running             metrics-server                           0                   82923a3fd96f8
	3be8c7041c401       b4df90000e547       9 minutes ago       Running             csi-external-health-monitor-controller   0                   b43814b8b8887
	e912ae66fde6f       a883f7fc35610       9 minutes ago       Exited              create                                   0                   02f6733c69e7f
	b0cee48d425ff       622522dfd285b       9 minutes ago       Exited              patch                                    1                   15c8a46826231
	1754595f2fdf2       3758cfc26c6db       9 minutes ago       Running             volume-snapshot-controller               0                   219d6b17a14a0
	4aead0d458e8c       622522dfd285b       9 minutes ago       Exited              create                                   0                   ba614e5d3b610
	19875c1ae013c       3758cfc26c6db       9 minutes ago       Running             volume-snapshot-controller               0                   4e18de11d0147
	47c956ca2950d       d65cad97e5f05       9 minutes ago       Running             csi-snapshotter                          0                   d8a6db9443d9a
	76df34c67e4d8       1a1f05a2cd7c2       9 minutes ago       Running             coredns                                  0                   4013fbad24448
	1b913c149feb0       63f120615f44b       9 minutes ago       Running             csi-external-health-monitor-agent        0                   b43814b8b8887
	f251119960206       ba04bb24b9575       9 minutes ago       Running             storage-provisioner                      0                   4925d0c76d0fb
	f263194c3f6d4       803606888e0b1       9 minutes ago       Running             csi-resizer                              0                   74a1f629e668d
	09e888f82c90d       03c15ec36e257       9 minutes ago       Running             csi-provisioner                          0                   2ab213d5623d7
	b57e0dbb56f13       4ea38350a1beb       10 minutes ago      Running             kube-proxy                               0                   77766a5e4eba5
	e811021829de7       f37b7c809e5dc       10 minutes ago      Running             kindnet-cni                              0                   84d8cbe537f13
	fb47330aab572       cb310ff289d79       11 minutes ago      Running             kube-controller-manager                  0                   c35a71b0e178a
	e34ccd1276019       44a6d50ef170d       11 minutes ago      Running             kube-apiserver                           0                   802bb6c418a36
	3c1ce4b5f6d51       05b738aa1bc63       11 minutes ago      Running             etcd                                     0                   a0068af440460
	ecb0d384c34ed       31a3b96cefc1e       11 minutes ago      Running             kube-scheduler                           0                   6d9eb8373b6c3
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 03:29:47 UTC, end at Fri 2021-08-13 03:41:45 UTC. --
	Aug 13 03:37:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:37:56.955716827Z" level=info msg="TaskExit event &TaskExit{ContainerID:50a125019a2232ebba8be710db3a5a5a69e0b861aeeb6765098d1f0328d9b2b4,ID:50a125019a2232ebba8be710db3a5a5a69e0b861aeeb6765098d1f0328d9b2b4,Pid:9961,ExitStatus:1,ExitedAt:2021-08-13 03:37:56.952775278 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 03:37:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:37:56.994335837Z" level=info msg="shim disconnected" id=50a125019a2232ebba8be710db3a5a5a69e0b861aeeb6765098d1f0328d9b2b4
	Aug 13 03:37:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:37:56.994419324Z" level=error msg="copy shim log" error="read /proc/self/fd/243: file already closed"
	Aug 13 03:37:57 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:37:57.129266757Z" level=info msg="RemoveContainer for \"13cd2e8a9ba0e35f8d655c966d78493f1135d32b9e87c29b38c0206fce73a2e6\""
	Aug 13 03:37:57 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:37:57.139636226Z" level=info msg="RemoveContainer for \"13cd2e8a9ba0e35f8d655c966d78493f1135d32b9e87c29b38c0206fce73a2e6\" returns successfully"
	Aug 13 03:38:08 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:38:08.277515084Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:registry-test,Uid:40035019-666d-4c7e-b5c2-8ad5e55c8196,Namespace:default,Attempt:0,}"
	Aug 13 03:38:08 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:38:08.377675546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f pid=10113
	Aug 13 03:38:08 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:38:08.470829723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:registry-test,Uid:40035019-666d-4c7e-b5c2-8ad5e55c8196,Namespace:default,Attempt:0,} returns sandbox id \"eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f\""
	Aug 13 03:38:08 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:38:08.472435082Z" level=info msg="PullImage \"busybox:latest\""
	Aug 13 03:38:09 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:38:09.425381939Z" level=error msg="PullImage \"busybox:latest\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 03:38:22 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:38:22.845431441Z" level=info msg="PullImage \"busybox:latest\""
	Aug 13 03:38:23 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:38:23.783094914Z" level=error msg="PullImage \"busybox:latest\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 03:38:51 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:38:51.842193745Z" level=info msg="PullImage \"busybox:latest\""
	Aug 13 03:38:52 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:38:52.773430795Z" level=error msg="PullImage \"busybox:latest\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 03:39:08 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:08.845127309Z" level=info msg="StopPodSandbox for \"eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f\""
	Aug 13 03:39:08 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:08.860210846Z" level=info msg="TaskExit event &TaskExit{ContainerID:eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f,ID:eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f,Pid:10133,ExitStatus:137,ExitedAt:2021-08-13 03:39:08.860011421 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 03:39:08 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:08.896425586Z" level=info msg="shim disconnected" id=eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f
	Aug 13 03:39:08 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:08.897134613Z" level=error msg="copy shim log" error="read /proc/self/fd/93: file already closed"
	Aug 13 03:39:08 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:08.936975346Z" level=info msg="TearDown network for sandbox \"eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f\" successfully"
	Aug 13 03:39:08 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:08.937025864Z" level=info msg="StopPodSandbox for \"eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f\" returns successfully"
	Aug 13 03:39:50 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:50.834364545Z" level=info msg="StopPodSandbox for \"eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f\""
	Aug 13 03:39:50 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:50.855296084Z" level=info msg="TearDown network for sandbox \"eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f\" successfully"
	Aug 13 03:39:50 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:50.855346348Z" level=info msg="StopPodSandbox for \"eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f\" returns successfully"
	Aug 13 03:39:50 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:50.855724225Z" level=info msg="RemovePodSandbox for \"eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f\""
	Aug 13 03:39:50 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:39:50.862443002Z" level=info msg="RemovePodSandbox \"eed8a957e73482ffa4e815dea19c4b12bdb98916161c26d1173df76e82abe04f\" returns successfully"
	
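	The repeated 429 responses in the containerd log above come from Docker Hub's anonymous pull rate limit. Docker documents checking the current limit via the dedicated `ratelimitpreview/test` image; a sketch, assuming `curl` and `jq` are available on the host:

```shell
# Fetch an anonymous pull token for the rate-limit preview image
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
  | jq -r .token)

# Per Docker's docs, a HEAD request should not count against the limit;
# the ratelimit-limit / ratelimit-remaining headers show the current state
curl -s --head -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest \
  | grep -i ratelimit
```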
	* 
	* ==> coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210813032940-2022292
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210813032940-2022292
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=addons-20210813032940-2022292
	                    minikube.k8s.io/updated_at=2021_08_13T03_30_43_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210813032940-2022292
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-20210813032940-2022292"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 03:30:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210813032940-2022292
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 03:41:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 03:38:23 +0000   Fri, 13 Aug 2021 03:30:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 03:38:23 +0000   Fri, 13 Aug 2021 03:30:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 03:38:23 +0000   Fri, 13 Aug 2021 03:30:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 03:38:23 +0000   Fri, 13 Aug 2021 03:31:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210813032940-2022292
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                cd349576-1400-4f29-881c-2488bb4cb8bc
	  Boot ID:                    0b91f2d0-31de-4b03-9973-67e3d0024ffb
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.6
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-5954cc4898-75xcj                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-59b45fb494-2m89h                100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         10m
	  kube-system                 coredns-558bd4d5db-69x4l                                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 csi-hostpath-attacher-0                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-provisioner-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-snapshotter-0                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-0                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-20210813032940-2022292                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-6qhgq                                            100m (5%)    100m (5%)    50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-20210813032940-2022292             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-20210813032940-2022292    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9knsw                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-20210813032940-2022292             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-77c99ccb96-vn6tn                          100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         10m
	  kube-system                 registry-5f6m6                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 registry-proxy-dg8n7                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-989f9ddc8-6wzsp                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-989f9ddc8-shj76                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  olm                         catalog-operator-75d496484d-xh6n8                        10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         10m
	  olm                         olm-operator-859c88c96-whcps                             10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1070m (53%)  100m (5%)
	  memory             850Mi (10%)  220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 11m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x5 over 11m)  kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x4 over 11m)  kubelet     Node addons-20210813032940-2022292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x4 over 11m)  kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet     Node addons-20210813032940-2022292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                10m                kubelet     Node addons-20210813032940-2022292 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug13 02:55] systemd-journald[174]: Failed to send stream file descriptor to service manager: Connection refused
	
	* 
	* ==> etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] <==
	* 2021-08-13 03:38:02.903918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:38:12.903740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:38:22.903892 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:38:32.903790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:38:42.903256 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:38:52.903491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:39:02.903791 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:39:12.903445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:39:22.904262 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:39:32.903820 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:39:42.903648 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:39:52.903811 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:40:02.903969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:40:12.903415 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:40:22.903591 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:40:32.903593 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:40:34.453326 I | mvcc: store.index: compact 1582
	2021-08-13 03:40:34.477417 I | mvcc: finished scheduled compaction at 1582 (took 23.561071ms)
	2021-08-13 03:40:42.903861 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:40:52.903736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:41:02.903935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:41:12.903583 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:41:22.904196 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:41:32.904054 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:41:42.904253 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  03:41:46 up 13:24,  0 users,  load average: 0.68, 0.76, 1.75
	Linux addons-20210813032940-2022292 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] <==
	* I0813 03:36:04.057040       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:36:42.910903       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:36:42.910944       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:36:42.910953       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:37:22.534168       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:37:22.534207       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:37:22.534217       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:38:01.255418       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:38:01.255459       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:38:01.255467       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:38:37.204887       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:38:37.204930       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:38:37.205037       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:39:14.667697       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:39:14.667735       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:39:14.667743       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:39:54.022964       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:39:54.023006       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:39:54.023015       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:40:35.324248       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:40:35.324288       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:40:35.324297       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:41:14.261225       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:41:14.261355       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:41:14.261369       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] <==
	* I0813 03:31:04.720931       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-provisioner" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful"
	I0813 03:31:04.807501       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-resizer" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful"
	I0813 03:31:05.020471       1 event.go:291] "Event occurred" object="kube-system/csi-hostpath-snapshotter" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful"
	E0813 03:31:26.035949       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0813 03:31:26.036173       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for clusterserviceversions.operators.coreos.com
	I0813 03:31:26.036214       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for installplans.operators.coreos.com
	I0813 03:31:26.036251       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for catalogsources.operators.coreos.com
	I0813 03:31:26.036278       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for volumesnapshots.snapshot.storage.k8s.io
	I0813 03:31:26.036308       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for operatorgroups.operators.coreos.com
	I0813 03:31:26.036363       1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for subscriptions.operators.coreos.com
	I0813 03:31:26.036439       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0813 03:31:26.237317       1 shared_informer.go:247] Caches are synced for resource quota 
	W0813 03:31:26.556122       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 03:31:26.568634       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0813 03:31:26.572210       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0813 03:31:26.573483       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0813 03:31:26.674215       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 03:31:40.971505       1 event.go:291] "Event occurred" object="kube-system/registry-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: registry-proxy-dg8n7"
	I0813 03:31:45.796766       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0813 03:31:56.268956       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 03:31:56.697258       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0813 03:32:03.791215       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0813 03:32:04.587722       1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0813 03:32:05.163029       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0813 03:32:09.178149       1 event.go:291] "Event occurred" object="ingress-nginx/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	
	* 
	* ==> kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] <==
	* I0813 03:30:58.916090       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 03:30:58.916147       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 03:30:58.916169       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 03:30:59.027458       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 03:30:59.027492       1 server_others.go:212] Using iptables Proxier.
	I0813 03:30:59.027502       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 03:30:59.027515       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 03:30:59.027867       1 server.go:643] Version: v1.21.3
	I0813 03:30:59.037124       1 config.go:315] Starting service config controller
	I0813 03:30:59.037136       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 03:30:59.037153       1 config.go:224] Starting endpoint slice config controller
	I0813 03:30:59.037156       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 03:30:59.040531       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 03:30:59.051028       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 03:30:59.141975       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 03:30:59.142026       1 shared_informer.go:247] Caches are synced for service config 
	W0813 03:36:56.043703       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] <==
	* W0813 03:30:39.532366       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 03:30:39.532414       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 03:30:39.532440       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 03:30:39.532453       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 03:30:39.630521       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 03:30:39.630629       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 03:30:39.634320       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 03:30:39.634682       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 03:30:39.656905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 03:30:39.657167       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.657219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 03:30:39.657322       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 03:30:39.657373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.657425       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 03:30:39.657469       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.657522       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 03:30:39.657586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 03:30:39.657632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 03:30:39.657673       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 03:30:39.657783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.660525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 03:30:39.673014       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 03:30:40.516857       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 03:30:40.593078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0813 03:30:40.931616       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 03:29:47 UTC, end at Fri 2021-08-13 03:41:46 UTC. --
	Aug 13 03:40:52 addons-20210813032940-2022292 kubelet[1185]: E0813 03:40:52.844222    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-whcps_olm(9dfb17b5-db48-44a1-8daf-33ce6de73034)\"" pod="olm/olm-operator-859c88c96-whcps" podUID=9dfb17b5-db48-44a1-8daf-33ce6de73034
	Aug 13 03:41:03 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:03.840889    1185 scope.go:111] "RemoveContainer" containerID="50a125019a2232ebba8be710db3a5a5a69e0b861aeeb6765098d1f0328d9b2b4"
	Aug 13 03:41:03 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:03.841314    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-whcps_olm(9dfb17b5-db48-44a1-8daf-33ce6de73034)\"" pod="olm/olm-operator-859c88c96-whcps" podUID=9dfb17b5-db48-44a1-8daf-33ce6de73034
	Aug 13 03:41:03 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:03.841327    1185 scope.go:111] "RemoveContainer" containerID="42d861814766fb5415c401654a889f4b54c30e1fe9361115c2dc3e8a3e09a0eb"
	Aug 13 03:41:03 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:03.841585    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=registry-proxy pod=registry-proxy-dg8n7_kube-system(3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031)\"" pod="kube-system/registry-proxy-dg8n7" podUID=3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031
	Aug 13 03:41:07 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:07.841242    1185 scope.go:111] "RemoveContainer" containerID="2f2efca15de857027d89a08f9c662eb77a41f8059560749dc832a83171fecf62"
	Aug 13 03:41:07 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:07.841642    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-xh6n8_olm(2a58a6fd-48ea-44a7-884d-f814b730c87a)\"" pod="olm/catalog-operator-75d496484d-xh6n8" podUID=2a58a6fd-48ea-44a7-884d-f814b730c87a
	Aug 13 03:41:14 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:14.840370    1185 scope.go:111] "RemoveContainer" containerID="42d861814766fb5415c401654a889f4b54c30e1fe9361115c2dc3e8a3e09a0eb"
	Aug 13 03:41:14 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:14.841052    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=registry-proxy pod=registry-proxy-dg8n7_kube-system(3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031)\"" pod="kube-system/registry-proxy-dg8n7" podUID=3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031
	Aug 13 03:41:17 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:17.840889    1185 scope.go:111] "RemoveContainer" containerID="50a125019a2232ebba8be710db3a5a5a69e0b861aeeb6765098d1f0328d9b2b4"
	Aug 13 03:41:17 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:17.841273    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-whcps_olm(9dfb17b5-db48-44a1-8daf-33ce6de73034)\"" pod="olm/olm-operator-859c88c96-whcps" podUID=9dfb17b5-db48-44a1-8daf-33ce6de73034
	Aug 13 03:41:18 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:18.840998    1185 scope.go:111] "RemoveContainer" containerID="2f2efca15de857027d89a08f9c662eb77a41f8059560749dc832a83171fecf62"
	Aug 13 03:41:18 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:18.841317    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-xh6n8_olm(2a58a6fd-48ea-44a7-884d-f814b730c87a)\"" pod="olm/catalog-operator-75d496484d-xh6n8" podUID=2a58a6fd-48ea-44a7-884d-f814b730c87a
	Aug 13 03:41:27 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:27.840661    1185 scope.go:111] "RemoveContainer" containerID="42d861814766fb5415c401654a889f4b54c30e1fe9361115c2dc3e8a3e09a0eb"
	Aug 13 03:41:27 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:27.840924    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=registry-proxy pod=registry-proxy-dg8n7_kube-system(3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031)\"" pod="kube-system/registry-proxy-dg8n7" podUID=3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031
	Aug 13 03:41:29 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:29.840549    1185 scope.go:111] "RemoveContainer" containerID="2f2efca15de857027d89a08f9c662eb77a41f8059560749dc832a83171fecf62"
	Aug 13 03:41:29 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:29.841834    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-xh6n8_olm(2a58a6fd-48ea-44a7-884d-f814b730c87a)\"" pod="olm/catalog-operator-75d496484d-xh6n8" podUID=2a58a6fd-48ea-44a7-884d-f814b730c87a
	Aug 13 03:41:30 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:30.842013    1185 scope.go:111] "RemoveContainer" containerID="50a125019a2232ebba8be710db3a5a5a69e0b861aeeb6765098d1f0328d9b2b4"
	Aug 13 03:41:30 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:30.842371    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-whcps_olm(9dfb17b5-db48-44a1-8daf-33ce6de73034)\"" pod="olm/olm-operator-859c88c96-whcps" podUID=9dfb17b5-db48-44a1-8daf-33ce6de73034
	Aug 13 03:41:39 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:39.840982    1185 scope.go:111] "RemoveContainer" containerID="42d861814766fb5415c401654a889f4b54c30e1fe9361115c2dc3e8a3e09a0eb"
	Aug 13 03:41:39 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:39.841639    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=registry-proxy pod=registry-proxy-dg8n7_kube-system(3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031)\"" pod="kube-system/registry-proxy-dg8n7" podUID=3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031
	Aug 13 03:41:41 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:41.840929    1185 scope.go:111] "RemoveContainer" containerID="2f2efca15de857027d89a08f9c662eb77a41f8059560749dc832a83171fecf62"
	Aug 13 03:41:41 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:41.841292    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-xh6n8_olm(2a58a6fd-48ea-44a7-884d-f814b730c87a)\"" pod="olm/catalog-operator-75d496484d-xh6n8" podUID=2a58a6fd-48ea-44a7-884d-f814b730c87a
	Aug 13 03:41:42 addons-20210813032940-2022292 kubelet[1185]: I0813 03:41:42.841145    1185 scope.go:111] "RemoveContainer" containerID="50a125019a2232ebba8be710db3a5a5a69e0b861aeeb6765098d1f0328d9b2b4"
	Aug 13 03:41:42 addons-20210813032940-2022292 kubelet[1185]: E0813 03:41:42.841850    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-whcps_olm(9dfb17b5-db48-44a1-8daf-33ce6de73034)\"" pod="olm/olm-operator-859c88c96-whcps" podUID=9dfb17b5-db48-44a1-8daf-33ce6de73034
	
	* 
	* ==> storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] <==
	* I0813 03:31:53.616300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 03:31:53.654387       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 03:31:53.654428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 03:31:53.684054       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 03:31:53.688430       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210813032940-2022292_487120fb-5274-456a-8b7e-33f90e734a44!
	I0813 03:31:53.696297       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7447bc9-2c2e-4fe9-978d-7328239a1c68", APIVersion:"v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210813032940-2022292_487120fb-5274-456a-8b7e-33f90e734a44 became leader
	I0813 03:31:53.792536       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210813032940-2022292_487120fb-5274-456a-8b7e-33f90e734a44!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210813032940-2022292 -n addons-20210813032940-2022292
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210813032940-2022292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: gcp-auth-certs-create-465t2 gcp-auth-certs-patch-lgltv ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210813032940-2022292 describe pod gcp-auth-certs-create-465t2 gcp-auth-certs-patch-lgltv ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210813032940-2022292 describe pod gcp-auth-certs-create-465t2 gcp-auth-certs-patch-lgltv ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx: exit status 1 (79.161904ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-465t2" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-lgltv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-r7rsv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2wdhx" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210813032940-2022292 describe pod gcp-auth-certs-create-465t2 gcp-auth-certs-patch-lgltv ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx: exit status 1
--- FAIL: TestAddons/parallel/Registry (285.72s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (243.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-r7rsv" [ee69fb49-eb9b-4d2c-948e-36b868e03e6c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 14.965183ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210813032940-2022292 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210813032940-2022292 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [15d8912a-aaaa-4e7f-9212-a8819a810920] Pending
helpers_test.go:343: "nginx" [15d8912a-aaaa-4e7f-9212-a8819a810920] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 4m0s: timed out waiting for the condition ****
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210813032940-2022292 -n addons-20210813032940-2022292
addons_test.go:185: TestAddons/parallel/Ingress: showing logs for failed pods as of 2021-08-13 03:53:00.433237883 +0000 UTC m=+1478.445297076
addons_test.go:185: (dbg) Run:  kubectl --context addons-20210813032940-2022292 describe po nginx -n default
addons_test.go:185: (dbg) kubectl --context addons-20210813032940-2022292 describe po nginx -n default:
Name:         nginx
Namespace:    default
Priority:     0
Node:         addons-20210813032940-2022292/192.168.49.2
Start Time:   Fri, 13 Aug 2021 03:49:00 +0000
Labels:       run=nginx
Annotations:  <none>
Status:       Pending
IP:           10.244.0.25
IPs:
IP:  10.244.0.25
Containers:
nginx:
Container ID:   
Image:          nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n6rkg (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-n6rkg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  4m                     default-scheduler  Successfully assigned default/nginx to addons-20210813032940-2022292
Warning  Failed     3m46s                  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    2m36s (x4 over 4m)     kubelet            Pulling image "nginx:alpine"
Warning  Failed     2m35s (x3 over 3m59s)  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     2m35s (x4 over 3m59s)  kubelet            Error: ErrImagePull
Warning  Failed     2m11s (x6 over 3m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    119s (x7 over 3m59s)   kubelet            Back-off pulling image "nginx:alpine"
addons_test.go:185: (dbg) Run:  kubectl --context addons-20210813032940-2022292 logs nginx -n default
addons_test.go:185: (dbg) Non-zero exit: kubectl --context addons-20210813032940-2022292 logs nginx -n default: exit status 1 (98.228261ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:185: kubectl --context addons-20210813032940-2022292 logs nginx -n default: exit status 1
addons_test.go:186: failed waiting for nginx pod: run=nginx within 4m0s: timed out waiting for the condition
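
Note: the kubelet Events above show the actual root cause — every pull of `nginx:alpine` from Docker Hub was rejected with `429 Too Many Requests` (the anonymous pull rate limit), so the pod never left ImagePullBackOff; the test timeout is a symptom, not the failure. One common mitigation, sketched below only as an assumption (it is not what this test suite does), is to authenticate pulls via an `imagePullSecret`; the secret name `dockerhub-creds` and this pod spec are hypothetical:

```yaml
# Hypothetical mitigation sketch: authenticated Docker Hub pulls avoid the
# anonymous 429 rate limit. The secret would be created first, e.g.:
#   kubectl create secret docker-registry dockerhub-creds \
#     --docker-username=<user> --docker-password=<access-token>
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  imagePullSecrets:
    - name: dockerhub-creds   # hypothetical secret holding Docker Hub credentials
  containers:
    - name: nginx
      image: nginx:alpine
      ports:
        - containerPort: 80
```

Alternatively, pointing the container runtime at a registry mirror (e.g. minikube's `--registry-mirror` flag) keeps test pulls off registry-1.docker.io entirely.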
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210813032940-2022292
helpers_test.go:236: (dbg) docker inspect addons-20210813032940-2022292:

-- stdout --
	[
	    {
	        "Id": "5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212",
	        "Created": "2021-08-13T03:29:46.326395701Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2023217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T03:29:46.770105005Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/hostname",
	        "HostsPath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/hosts",
	        "LogPath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212-json.log",
	        "Name": "/addons-20210813032940-2022292",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210813032940-2022292:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210813032940-2022292",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa3171af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/docker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210813032940-2022292",
	                "Source": "/var/lib/docker/volumes/addons-20210813032940-2022292/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210813032940-2022292",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210813032940-2022292",
	                "name.minikube.sigs.k8s.io": "addons-20210813032940-2022292",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63cc91236c7d0216218ed6a99d16bf5a5214d1f2a29fe790b354ed1c3d95269a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50803"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50802"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50799"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50801"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50800"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/63cc91236c7d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210813032940-2022292": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5eb115611cc3",
	                        "addons-20210813032940-2022292"
	                    ],
	                    "NetworkID": "1437cc990d89cd4c2f4b86b77c1e915486671cda7aa7c792c2322229d169e87c",
	                    "EndpointID": "96d417aa7e8c5077c1e5d843cea177ffb9c204a83528a1fa41771ba12d8e11cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210813032940-2022292 -n addons-20210813032940-2022292
helpers_test.go:245: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210813032940-2022292 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-20210813032940-2022292 logs -n 25: (1.223009086s)
helpers_test.go:253: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                  | download-only-20210813032822-2022292   | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:26 UTC | Fri, 13 Aug 2021 03:29:26 UTC |
	| delete  | -p                                     | download-only-20210813032822-2022292   | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:26 UTC | Fri, 13 Aug 2021 03:29:26 UTC |
	|         | download-only-20210813032822-2022292   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-only-20210813032822-2022292   | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:26 UTC | Fri, 13 Aug 2021 03:29:26 UTC |
	|         | download-only-20210813032822-2022292   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-docker-20210813032926-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:40 UTC | Fri, 13 Aug 2021 03:29:40 UTC |
	|         | download-docker-20210813032926-2022292 |                                        |         |         |                               |                               |
	| start   | -p                                     | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:40 UTC | Fri, 13 Aug 2021 03:37:01 UTC |
	|         | addons-20210813032940-2022292          |                                        |         |         |                               |                               |
	|         | --wait=true --memory=4000              |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --addons=registry                      |                                        |         |         |                               |                               |
	|         | --addons=metrics-server                |                                        |         |         |                               |                               |
	|         | --addons=olm                           |                                        |         |         |                               |                               |
	|         | --addons=volumesnapshots               |                                        |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver           |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --addons=ingress                       |                                        |         |         |                               |                               |
	|         | --addons=gcp-auth                      |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:39:08 UTC | Fri, 13 Aug 2021 03:39:08 UTC |
	|         | ip                                     |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:41:44 UTC | Fri, 13 Aug 2021 03:41:44 UTC |
	|         | addons disable registry                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:41:45 UTC | Fri, 13 Aug 2021 03:41:46 UTC |
	|         | logs -n 25                             |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:41:57 UTC | Fri, 13 Aug 2021 03:42:24 UTC |
	|         | addons disable gcp-auth                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:48:45 UTC | Fri, 13 Aug 2021 03:48:52 UTC |
	|         | addons disable                         |                                        |         |         |                               |                               |
	|         | csi-hostpath-driver                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:48:52 UTC | Fri, 13 Aug 2021 03:48:53 UTC |
	|         | addons disable volumesnapshots         |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:48:58 UTC | Fri, 13 Aug 2021 03:48:59 UTC |
	|         | addons disable metrics-server          |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:49:12 UTC | Fri, 13 Aug 2021 03:49:13 UTC |
	|         | logs -n 25                             |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 03:29:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 03:29:40.904577 2022781 out.go:298] Setting OutFile to fd 1 ...
	I0813 03:29:40.904648 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:29:40.904652 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:29:40.904656 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:29:40.904776 2022781 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 03:29:40.905029 2022781 out.go:305] Setting JSON to false
	I0813 03:29:40.905896 2022781 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":47525,"bootTime":1628777856,"procs":373,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 03:29:40.905961 2022781 start.go:121] virtualization:  
	I0813 03:29:40.908162 2022781 out.go:177] * [addons-20210813032940-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0813 03:29:40.910717 2022781 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 03:29:40.909322 2022781 notify.go:169] Checking for updates...
	I0813 03:29:40.912282 2022781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 03:29:40.913989 2022781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 03:29:40.915709 2022781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 03:29:40.915862 2022781 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 03:29:40.950762 2022781 docker.go:132] docker version: linux-20.10.8
	I0813 03:29:40.950850 2022781 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:29:41.048943 2022781 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:29:40.991652348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:29:41.049041 2022781 docker.go:244] overlay module found
	I0813 03:29:41.051223 2022781 out.go:177] * Using the docker driver based on user configuration
	I0813 03:29:41.051242 2022781 start.go:278] selected driver: docker
	I0813 03:29:41.051247 2022781 start.go:751] validating driver "docker" against <nil>
	I0813 03:29:41.051260 2022781 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 03:29:41.051298 2022781 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 03:29:41.051322 2022781 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 03:29:41.053106 2022781 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 03:29:41.053411 2022781 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:29:41.128846 2022781 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:29:41.078382939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:29:41.128961 2022781 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 03:29:41.129117 2022781 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 03:29:41.129138 2022781 cni.go:93] Creating CNI manager for ""
	I0813 03:29:41.129145 2022781 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:29:41.129158 2022781 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 03:29:41.129163 2022781 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 03:29:41.129174 2022781 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 03:29:41.129183 2022781 start_flags.go:277] config:
	{Name:addons-20210813032940-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:29:41.131398 2022781 out.go:177] * Starting control plane node addons-20210813032940-2022292 in cluster addons-20210813032940-2022292
	I0813 03:29:41.131428 2022781 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 03:29:41.133269 2022781 out.go:177] * Pulling base image ...
	I0813 03:29:41.133290 2022781 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:29:41.133320 2022781 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0813 03:29:41.133338 2022781 cache.go:56] Caching tarball of preloaded images
	I0813 03:29:41.133463 2022781 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 03:29:41.133484 2022781 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 03:29:41.133759 2022781 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/config.json ...
	I0813 03:29:41.133785 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/config.json: {Name:mk0d1eb11345f673782e67cee6dd1983fc2ade38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:29:41.133935 2022781 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 03:29:41.165619 2022781 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 03:29:41.165641 2022781 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 03:29:41.165654 2022781 cache.go:205] Successfully downloaded all kic artifacts
	I0813 03:29:41.165678 2022781 start.go:313] acquiring machines lock for addons-20210813032940-2022292: {Name:mk4b9c97c204520a15a5934e9d971902370f4475 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 03:29:41.165798 2022781 start.go:317] acquired machines lock for "addons-20210813032940-2022292" in 99.224µs
	I0813 03:29:41.165826 2022781 start.go:89] Provisioning new machine with config: &{Name:addons-20210813032940-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 03:29:41.165896 2022781 start.go:126] createHost starting for "" (driver="docker")
	I0813 03:29:41.168439 2022781 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0813 03:29:41.168667 2022781 start.go:160] libmachine.API.Create for "addons-20210813032940-2022292" (driver="docker")
	I0813 03:29:41.168697 2022781 client.go:168] LocalClient.Create starting
	I0813 03:29:41.168779 2022781 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 03:29:41.457503 2022781 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 03:29:42.069244 2022781 cli_runner.go:115] Run: docker network inspect addons-20210813032940-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 03:29:42.096969 2022781 cli_runner.go:162] docker network inspect addons-20210813032940-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 03:29:42.097037 2022781 network_create.go:255] running [docker network inspect addons-20210813032940-2022292] to gather additional debugging logs...
	I0813 03:29:42.097062 2022781 cli_runner.go:115] Run: docker network inspect addons-20210813032940-2022292
	W0813 03:29:42.123327 2022781 cli_runner.go:162] docker network inspect addons-20210813032940-2022292 returned with exit code 1
	I0813 03:29:42.123351 2022781 network_create.go:258] error running [docker network inspect addons-20210813032940-2022292]: docker network inspect addons-20210813032940-2022292: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210813032940-2022292
	I0813 03:29:42.123365 2022781 network_create.go:260] output of [docker network inspect addons-20210813032940-2022292]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210813032940-2022292
	
	** /stderr **
	I0813 03:29:42.123423 2022781 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 03:29:42.150055 2022781 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x4000892220] misses:0}
	I0813 03:29:42.150105 2022781 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
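	The reserved /24 above expands deterministically into the gateway and client range the log reports. A minimal sketch of that derivation with pure string handling (assumes a /24 prefix, as here):

	```shell
	# Derive the per-/24 address plan that network.go logs for 192.168.49.0/24.
	SUBNET=192.168.49.0/24
	BASE=${SUBNET%.*}        # strip ".0/24" -> 192.168.49
	GATEWAY=${BASE}.1        # first host address, used by the bridge
	CLIENT_MIN=${BASE}.2     # first assignable container IP (the node gets this)
	CLIENT_MAX=${BASE}.254   # last assignable container IP
	BROADCAST=${BASE}.255
	echo "$GATEWAY $CLIENT_MIN $CLIENT_MAX $BROADCAST"
	# prints: 192.168.49.1 192.168.49.2 192.168.49.254 192.168.49.255
	```

	This matches the `Gateway`/`ClientMin`/`ClientMax`/`Broadcast` fields in the log line above; for non-/24 subnets minikube computes the range from the mask instead.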
	I0813 03:29:42.150124 2022781 network_create.go:106] attempt to create docker network addons-20210813032940-2022292 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 03:29:42.150170 2022781 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210813032940-2022292
	I0813 03:29:42.365897 2022781 network_create.go:90] docker network addons-20210813032940-2022292 192.168.49.0/24 created
	I0813 03:29:42.365924 2022781 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210813032940-2022292" container
	I0813 03:29:42.365989 2022781 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 03:29:42.392753 2022781 cli_runner.go:115] Run: docker volume create addons-20210813032940-2022292 --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --label created_by.minikube.sigs.k8s.io=true
	I0813 03:29:42.465525 2022781 oci.go:102] Successfully created a docker volume addons-20210813032940-2022292
	I0813 03:29:42.465589 2022781 cli_runner.go:115] Run: docker run --rm --name addons-20210813032940-2022292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --entrypoint /usr/bin/test -v addons-20210813032940-2022292:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 03:29:46.145957 2022781 cli_runner.go:168] Completed: docker run --rm --name addons-20210813032940-2022292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --entrypoint /usr/bin/test -v addons-20210813032940-2022292:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (3.680326113s)
	I0813 03:29:46.145978 2022781 oci.go:106] Successfully prepared a docker volume addons-20210813032940-2022292
	W0813 03:29:46.146006 2022781 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 03:29:46.146013 2022781 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 03:29:46.146068 2022781 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 03:29:46.146285 2022781 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:29:46.146304 2022781 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 03:29:46.146345 2022781 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210813032940-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 03:29:46.276252 2022781 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210813032940-2022292 --name addons-20210813032940-2022292 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210813032940-2022292 --network addons-20210813032940-2022292 --ip 192.168.49.2 --volume addons-20210813032940-2022292:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 03:29:46.777489 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Running}}
	I0813 03:29:46.831248 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:29:46.876305 2022781 cli_runner.go:115] Run: docker exec addons-20210813032940-2022292 stat /var/lib/dpkg/alternatives/iptables
	I0813 03:29:46.966276 2022781 oci.go:278] the created container "addons-20210813032940-2022292" has a running status.
	I0813 03:29:46.966302 2022781 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa...
	I0813 03:29:48.086545 2022781 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 03:30:00.277285 2022781 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210813032940-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (14.130901108s)
	I0813 03:30:00.277310 2022781 kic.go:188] duration metric: took 14.131004 seconds to extract preloaded images to volume
	I0813 03:30:00.350109 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:00.387273 2022781 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 03:30:00.387290 2022781 kic_runner.go:115] Args: [docker exec --privileged addons-20210813032940-2022292 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 03:30:00.480821 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:00.523697 2022781 machine.go:88] provisioning docker machine ...
	I0813 03:30:00.523726 2022781 ubuntu.go:169] provisioning hostname "addons-20210813032940-2022292"
	I0813 03:30:00.523781 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:00.558122 2022781 main.go:130] libmachine: Using SSH client type: native
	I0813 03:30:00.558295 2022781 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50803 <nil> <nil>}
	I0813 03:30:00.558308 2022781 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210813032940-2022292 && echo "addons-20210813032940-2022292" | sudo tee /etc/hostname
	I0813 03:30:00.689627 2022781 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210813032940-2022292
	
	I0813 03:30:00.689693 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:00.721994 2022781 main.go:130] libmachine: Using SSH client type: native
	I0813 03:30:00.722165 2022781 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50803 <nil> <nil>}
	I0813 03:30:00.722192 2022781 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210813032940-2022292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210813032940-2022292/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210813032940-2022292' | sudo tee -a /etc/hosts; 
				fi
			fi
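	The guarded /etc/hosts rewrite that minikube runs over SSH (shown above) can be exercised against a scratch file. A sketch of the same logic; the file and the `addons-demo` hostname are illustrative, not from this run:

	```shell
	# Mirrors the log's /etc/hosts logic on a scratch copy; addons-demo is illustrative.
	HOSTS=$(mktemp)
	NEW_NAME=addons-demo
	printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
	# Only touch the file if the new name is not already present.
	if ! grep -q "[[:space:]]${NEW_NAME}\$" "$HOSTS"; then
	  if grep -q '^127.0.1.1[[:space:]]' "$HOSTS"; then
	    # Replace the existing 127.0.1.1 entry in place.
	    sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 ${NEW_NAME}/" "$HOSTS"
	  else
	    # No 127.0.1.1 entry yet; append one.
	    echo "127.0.1.1 ${NEW_NAME}" >> "$HOSTS"
	  fi
	fi
	grep '^127.0.1.1' "$HOSTS"
	# prints: 127.0.1.1 addons-demo
	```

	Updating 127.0.1.1 rather than 127.0.0.1 follows the Debian/Ubuntu convention of mapping the machine's own hostname to that address.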
	I0813 03:30:00.836190 2022781 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 03:30:00.836215 2022781 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e6
89d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 03:30:00.836235 2022781 ubuntu.go:177] setting up certificates
	I0813 03:30:00.836244 2022781 provision.go:83] configureAuth start
	I0813 03:30:00.836296 2022781 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813032940-2022292
	I0813 03:30:00.866297 2022781 provision.go:137] copyHostCerts
	I0813 03:30:00.866361 2022781 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 03:30:00.866440 2022781 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 03:30:00.866493 2022781 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 03:30:00.866533 2022781 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.addons-20210813032940-2022292 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210813032940-2022292]
	I0813 03:30:01.389006 2022781 provision.go:171] copyRemoteCerts
	I0813 03:30:01.389079 2022781 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 03:30:01.389121 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.419000 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.502519 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 03:30:01.520038 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0813 03:30:01.534523 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 03:30:01.548773 2022781 provision.go:86] duration metric: configureAuth took 712.517206ms
	I0813 03:30:01.548788 2022781 ubuntu.go:193] setting minikube options for container-runtime
	I0813 03:30:01.548937 2022781 machine.go:91] provisioned docker machine in 1.0252225s
	I0813 03:30:01.548943 2022781 client.go:171] LocalClient.Create took 20.380236744s
	I0813 03:30:01.548963 2022781 start.go:168] duration metric: libmachine.API.Create for "addons-20210813032940-2022292" took 20.380294582s
	I0813 03:30:01.548971 2022781 start.go:267] post-start starting for "addons-20210813032940-2022292" (driver="docker")
	I0813 03:30:01.548975 2022781 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 03:30:01.549015 2022781 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 03:30:01.549053 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.580251 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.662251 2022781 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 03:30:01.664643 2022781 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 03:30:01.664666 2022781 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 03:30:01.664677 2022781 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 03:30:01.664684 2022781 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 03:30:01.664694 2022781 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 03:30:01.664745 2022781 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 03:30:01.664771 2022781 start.go:270] post-start completed in 115.793872ms
	I0813 03:30:01.665039 2022781 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813032940-2022292
	I0813 03:30:01.693800 2022781 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/config.json ...
	I0813 03:30:01.694005 2022781 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 03:30:01.694053 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.721552 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.801607 2022781 start.go:129] duration metric: createHost completed in 20.635699035s
	I0813 03:30:01.801629 2022781 start.go:80] releasing machines lock for "addons-20210813032940-2022292", held for 20.635816952s
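	The disk-usage probe run just before the lock is released (`df -h /var | awk 'NR==2{print $5}'`) keeps only the Use% column of the row for /var. The same one-liner against the local root filesystem, with `-P` added to keep df's output on one line per filesystem:

	```shell
	# Column 5 of df's data row is the Use% figure minikube records for the volume.
	USAGE=$(df -hP / | awk 'NR==2{print $5}')
	echo "$USAGE"
	```

	minikube uses this percentage to warn when the node's /var volume is nearly full.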
	I0813 03:30:01.801697 2022781 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813032940-2022292
	I0813 03:30:01.830486 2022781 ssh_runner.go:149] Run: systemctl --version
	I0813 03:30:01.830532 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.830558 2022781 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 03:30:01.830610 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.866554 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.870429 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.952247 2022781 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 03:30:02.115786 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 03:30:02.123997 2022781 docker.go:153] disabling docker service ...
	I0813 03:30:02.124041 2022781 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 03:30:02.145698 2022781 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 03:30:02.154128 2022781 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 03:30:02.230172 2022781 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 03:30:02.309742 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 03:30:02.317886 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 03:30:02.328545 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 03:30:02.342323 2022781 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 03:30:02.348596 2022781 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 03:30:02.353961 2022781 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 03:30:02.428966 2022781 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 03:30:02.561206 2022781 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 03:30:02.561311 2022781 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 03:30:02.564783 2022781 start.go:417] Will wait 60s for crictl version
	I0813 03:30:02.564854 2022781 ssh_runner.go:149] Run: sudo crictl version
	I0813 03:30:02.628354 2022781 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T03:30:02Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 03:30:13.675200 2022781 ssh_runner.go:149] Run: sudo crictl version
	I0813 03:30:13.709613 2022781 start.go:426] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.6
	RuntimeApiVersion:  v1alpha2
	I0813 03:30:13.709705 2022781 ssh_runner.go:149] Run: containerd --version
	I0813 03:30:13.733653 2022781 ssh_runner.go:149] Run: containerd --version
	I0813 03:30:13.757722 2022781 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.6 ...
	I0813 03:30:13.757794 2022781 cli_runner.go:115] Run: docker network inspect addons-20210813032940-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 03:30:13.786942 2022781 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 03:30:13.789852 2022781 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 03:30:13.798329 2022781 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:30:13.798392 2022781 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 03:30:13.822356 2022781 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 03:30:13.822377 2022781 containerd.go:517] Images already preloaded, skipping extraction
	I0813 03:30:13.822420 2022781 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 03:30:13.844103 2022781 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 03:30:13.844124 2022781 cache_images.go:74] Images are preloaded, skipping loading
	I0813 03:30:13.844175 2022781 ssh_runner.go:149] Run: sudo crictl info
	I0813 03:30:13.867458 2022781 cni.go:93] Creating CNI manager for ""
	I0813 03:30:13.867480 2022781 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:30:13.867493 2022781 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 03:30:13.867528 2022781 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210813032940-2022292 NodeName:addons-20210813032940-2022292 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFil
e:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 03:30:13.867709 2022781 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "addons-20210813032940-2022292"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 03:30:13.867799 2022781 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-20210813032940-2022292 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 03:30:13.867861 2022781 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 03:30:13.874164 2022781 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 03:30:13.874220 2022781 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 03:30:13.880082 2022781 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (574 bytes)
	I0813 03:30:13.891242 2022781 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 03:30:13.902573 2022781 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0813 03:30:13.913737 2022781 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 03:30:13.916383 2022781 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 03:30:13.924200 2022781 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292 for IP: 192.168.49.2
	I0813 03:30:13.924238 2022781 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 03:30:14.303335 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt ...
	I0813 03:30:14.303366 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt: {Name:mk3901a19599d51a2d50c48585ff3f7192ba4433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.303553 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key ...
	I0813 03:30:14.303570 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key: {Name:mk845cb200e03c80833445af29652075ca29c5ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.303661 2022781 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 03:30:14.625439 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt ...
	I0813 03:30:14.625463 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt: {Name:mk50086ce36a18e239ef358ebe31b06ec58a54a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.625614 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key ...
	I0813 03:30:14.625629 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key: {Name:mkcd9f75f5685763d3008dae66cb562ca8ff349f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.625754 2022781 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.key
	I0813 03:30:14.625769 2022781 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt with IP's: []
	I0813 03:30:14.981494 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt ...
	I0813 03:30:14.981520 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: {Name:mk67389ffe06e3642f68dcb5d06f25c4a4286db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.981694 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.key ...
	I0813 03:30:14.981709 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.key: {Name:mk98a53e6092aad61eaf9907276fc969c6b86e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.981803 2022781 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2
	I0813 03:30:14.981815 2022781 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 03:30:15.445439 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2 ...
	I0813 03:30:15.445467 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2: {Name:mk68008aff00f28fd78f3516c58a44d15f90967b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:15.445636 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2 ...
	I0813 03:30:15.445650 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2: {Name:mk75a5de72872e71c9f625f9410c2e8267bb030b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:15.445738 2022781 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt
	I0813 03:30:15.445794 2022781 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key
	I0813 03:30:15.445841 2022781 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key
	I0813 03:30:15.445852 2022781 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt with IP's: []
	I0813 03:30:16.134694 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt ...
	I0813 03:30:16.134726 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt: {Name:mkc9f3f094f59bf4cae95593974525020ed0791c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:16.134902 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key ...
	I0813 03:30:16.134917 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key: {Name:mk3f97104a527dd489a07fc16ea52fabc4e3c427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:16.135088 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 03:30:16.135130 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 03:30:16.135160 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 03:30:16.135186 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 03:30:16.137695 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 03:30:16.153617 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 03:30:16.168774 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 03:30:16.183608 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 03:30:16.198875 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 03:30:16.214461 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 03:30:16.229908 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 03:30:16.245519 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 03:30:16.261024 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 03:30:16.276150 2022781 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 03:30:16.287817 2022781 ssh_runner.go:149] Run: openssl version
	I0813 03:30:16.292417 2022781 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 03:30:16.298969 2022781 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:30:16.301737 2022781 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 13 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:30:16.301792 2022781 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:30:16.306354 2022781 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 03:30:16.312798 2022781 kubeadm.go:390] StartCluster: {Name:addons-20210813032940-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:30:16.312962 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 03:30:16.313019 2022781 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 03:30:16.341509 2022781 cri.go:76] found id: ""
	I0813 03:30:16.341598 2022781 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 03:30:16.348137 2022781 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 03:30:16.354334 2022781 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 03:30:16.354389 2022781 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 03:30:16.360245 2022781 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 03:30:16.360292 2022781 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 03:30:16.991998 2022781 out.go:204]   - Generating certificates and keys ...
	I0813 03:30:22.528739 2022781 out.go:204]   - Booting up control plane ...
	I0813 03:30:42.096737 2022781 out.go:204]   - Configuring RBAC rules ...
	I0813 03:30:42.513534 2022781 cni.go:93] Creating CNI manager for ""
	I0813 03:30:42.513560 2022781 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:30:42.515615 2022781 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 03:30:42.515681 2022781 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 03:30:42.519188 2022781 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 03:30:42.519210 2022781 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 03:30:42.531743 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 03:30:43.275208 2022781 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 03:30:43.275325 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:43.275388 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=addons-20210813032940-2022292 minikube.k8s.io/updated_at=2021_08_13T03_30_43_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:43.426738 2022781 ops.go:34] apiserver oom_adj: -16
	I0813 03:30:43.426841 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:44.011413 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:44.511783 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:45.011599 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:45.510878 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:46.011296 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:46.511321 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:47.010930 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:47.510919 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:48.011476 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:48.511258 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:49.010873 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:49.511635 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:50.010907 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:50.511782 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:51.011260 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:51.511532 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:52.011061 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:52.510863 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:53.010893 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:53.511752 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:54.011653 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:54.511235 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:55.011781 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:55.511793 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:56.011692 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:56.511006 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:56.652188 2022781 kubeadm.go:985] duration metric: took 13.376902139s to wait for elevateKubeSystemPrivileges.
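The burst of identical `kubectl get sa default` invocations above is minikube polling, at roughly 500ms intervals, until the `default` service account exists in `kube-system` — the 13.37s reported for `elevateKubeSystemPrivileges` is exactly this wait. A minimal sketch of that wait pattern as a standalone shell helper (the `wait_until` name is hypothetical; the kubectl and kubeconfig paths in the comment are the ones from the log):

```shell
#!/bin/bash
# wait_until: re-run a command every 0.5s until it exits 0,
# mirroring minikube's poll for the default service account.
wait_until() {
  until "$@" >/dev/null 2>&1; do
    sleep 0.5
  done
}

# As in the log (requires a running cluster):
#   wait_until sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default \
#     --kubeconfig=/var/lib/minikube/kubeconfig
# Self-contained demonstration with a command that succeeds immediately:
wait_until true && echo "service account ready"
```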
	I0813 03:30:56.652210 2022781 kubeadm.go:392] StartCluster complete in 40.339416945s
	I0813 03:30:56.652225 2022781 settings.go:142] acquiring lock: {Name:mke0b9bf6059169e73bfde24fe8e8162c3ec0654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:56.652354 2022781 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 03:30:56.652762 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk6797826f33680e9cda7cd38a7adfcabda9681c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:57.192592 2022781 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210813032940-2022292" rescaled to 1
	I0813 03:30:57.192649 2022781 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 03:30:57.194497 2022781 out.go:177] * Verifying Kubernetes components...
	I0813 03:30:57.194581 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 03:30:57.192709 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 03:30:57.192937 2022781 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0813 03:30:57.194766 2022781 addons.go:59] Setting volumesnapshots=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.194794 2022781 addons.go:135] Setting addon volumesnapshots=true in "addons-20210813032940-2022292"
	I0813 03:30:57.194829 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.195355 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.195502 2022781 addons.go:59] Setting ingress=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.195519 2022781 addons.go:135] Setting addon ingress=true in "addons-20210813032940-2022292"
	I0813 03:30:57.195539 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.195954 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.196016 2022781 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.196040 2022781 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210813032940-2022292"
	I0813 03:30:57.196063 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.196472 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.196530 2022781 addons.go:59] Setting default-storageclass=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.196541 2022781 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210813032940-2022292"
	I0813 03:30:57.196744 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.196803 2022781 addons.go:59] Setting gcp-auth=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.196814 2022781 mustload.go:65] Loading cluster: addons-20210813032940-2022292
	I0813 03:30:57.197132 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.197184 2022781 addons.go:59] Setting olm=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.197194 2022781 addons.go:135] Setting addon olm=true in "addons-20210813032940-2022292"
	I0813 03:30:57.197212 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.200564 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.210302 2022781 addons.go:59] Setting metrics-server=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.210328 2022781 addons.go:135] Setting addon metrics-server=true in "addons-20210813032940-2022292"
	I0813 03:30:57.210364 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.210821 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.210931 2022781 addons.go:59] Setting registry=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.210942 2022781 addons.go:135] Setting addon registry=true in "addons-20210813032940-2022292"
	I0813 03:30:57.210969 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.211436 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.211498 2022781 addons.go:59] Setting storage-provisioner=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.211507 2022781 addons.go:135] Setting addon storage-provisioner=true in "addons-20210813032940-2022292"
	W0813 03:30:57.211512 2022781 addons.go:147] addon storage-provisioner should already be in state true
	I0813 03:30:57.211528 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.211909 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.359893 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0813 03:30:57.359970 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0813 03:30:57.359983 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0813 03:30:57.360041 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.397371 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 03:30:57.398431 2022781 node_ready.go:35] waiting up to 6m0s for node "addons-20210813032940-2022292" to be "Ready" ...
	I0813 03:30:57.457671 2022781 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0813 03:30:57.464684 2022781 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0813 03:30:57.571352 2022781 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0813 03:30:57.571410 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 03:30:57.571426 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 03:30:57.571484 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.621898 2022781 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0813 03:30:57.628181 2022781 out.go:177]   - Using image registry:2.7.1
	I0813 03:30:57.628300 2022781 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0813 03:30:57.628309 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0813 03:30:57.628386 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.642700 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0813 03:30:57.645886 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0813 03:30:57.671912 2022781 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0813 03:30:57.675421 2022781 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 03:30:57.677167 2022781 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 03:30:57.677235 2022781 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0813 03:30:57.677244 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0813 03:30:57.677305 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.724233 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0813 03:30:57.727436 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0813 03:30:57.735580 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0813 03:30:57.742927 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0813 03:30:57.739683 2022781 addons.go:135] Setting addon default-storageclass=true in "addons-20210813032940-2022292"
	I0813 03:30:57.739724 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:57.739755 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.724752 2022781 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0813 03:30:57.744017 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	W0813 03:30:57.745611 2022781 addons.go:147] addon default-storageclass should already be in state true
	I0813 03:30:57.745618 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0813 03:30:57.760402 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0813 03:30:57.756368 2022781 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 03:30:57.756381 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0813 03:30:57.756761 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.765577 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.765794 2022781 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 03:30:57.765818 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 03:30:57.765883 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.765965 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0813 03:30:57.766037 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0813 03:30:57.766058 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0813 03:30:57.766113 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.766218 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.818821 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:57.852135 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:57.889195 2022781 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0813 03:30:57.889281 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.970915 2022781 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 03:30:57.970933 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 03:30:57.970985 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.987520 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 03:30:57.987538 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0813 03:30:58.038677 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.056460 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.071053 2022781 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0813 03:30:58.071076 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0813 03:30:58.078413 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.124468 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.130893 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.143390 2022781 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0813 03:30:58.143407 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0813 03:30:58.145018 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 03:30:58.145035 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 03:30:58.183598 2022781 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0813 03:30:58.183619 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0813 03:30:58.215426 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 03:30:58.215446 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 03:30:58.221069 2022781 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0813 03:30:58.221112 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0813 03:30:58.238980 2022781 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0813 03:30:58.239028 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0813 03:30:58.243449 2022781 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0813 03:30:58.243489 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0813 03:30:58.304607 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0813 03:30:58.310261 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0813 03:30:58.348483 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0813 03:30:58.348534 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0813 03:30:58.368606 2022781 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0813 03:30:58.368664 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0813 03:30:58.373631 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 03:30:58.402890 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0813 03:30:58.402938 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0813 03:30:58.441222 2022781 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0813 03:30:58.441281 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0813 03:30:58.446337 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 03:30:58.505463 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0813 03:30:58.505525 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0813 03:30:58.556304 2022781 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0813 03:30:58.577016 2022781 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.179580348s)
	I0813 03:30:58.577078 2022781 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
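The pipeline that just completed rewrites the `coredns` ConfigMap in place: it reads the Corefile, uses sed to insert a `hosts` block (mapping `host.minikube.internal` to the gateway IP 192.168.49.1) immediately before the `forward . /etc/resolv.conf` directive, and feeds the result to `kubectl replace -f -`. A sketch of just the sed step on an illustrative Corefile fragment (the fragment is made up for the demo, not read from the cluster; the sed expression is the one from the log and relies on GNU sed treating `\n` in `i\` text as a newline):

```shell
# Illustrative Corefile fragment with the indentation CoreDNS uses.
corefile='        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }'

# Insert the hosts block before the forward directive, as the log's
# pipeline does before piping the result to 'kubectl replace -f -'.
echo "$corefile" | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }'
```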
	I0813 03:30:58.578408 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 03:30:58.608000 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0813 03:30:58.608059 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0813 03:30:58.663310 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 03:30:58.753249 2022781 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 03:30:58.753272 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0813 03:30:58.754118 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0813 03:30:58.754135 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0813 03:30:58.758582 2022781 addons.go:135] Setting addon gcp-auth=true in "addons-20210813032940-2022292"
	I0813 03:30:58.758626 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:58.759106 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:58.821220 2022781 out.go:177]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0813 03:30:58.823073 2022781 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0813 03:30:58.823123 2022781 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0813 03:30:58.823138 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0813 03:30:58.823192 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:58.886665 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.922146 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0813 03:30:58.922166 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0813 03:30:58.946902 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 03:30:58.975186 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0813 03:30:58.975210 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0813 03:30:58.994077 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0813 03:30:58.994096 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0813 03:30:59.121853 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0813 03:30:59.121918 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0813 03:30:59.311772 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0813 03:30:59.311791 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0813 03:30:59.407661 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:30:59.447040 2022781 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0813 03:30:59.447104 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0813 03:30:59.497092 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0813 03:30:59.497156 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0813 03:30:59.560580 2022781 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0813 03:30:59.560641 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0813 03:30:59.624771 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0813 03:30:59.624832 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0813 03:30:59.714273 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0813 03:30:59.850024 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 03:30:59.850092 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0813 03:30:59.940480 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
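Every addon above follows the same two-phase pattern: stage each manifest under `/etc/kubernetes/addons` over SSH (`scp memory --> ...`), then apply the whole group with a single `kubectl apply` carrying one `-f` per file — the thirteen-manifest csi-hostpath invocation just issued is the largest example. A sketch of how such a command line can be assembled (the three file names are examples picked from the log; the staging directory matches the one minikube uses):

```shell
# Build one 'kubectl apply' invocation covering several staged manifests.
ADDONS_DIR=/etc/kubernetes/addons
MANIFESTS="rbac-external-attacher.yaml csi-hostpath-plugin.yaml csi-hostpath-storageclass.yaml"

ARGS=""
for f in $MANIFESTS; do
  ARGS="$ARGS -f $ADDONS_DIR/$f"
done

# Print rather than run, since this sketch has no cluster to apply to:
echo "kubectl apply$ARGS"
```

Applying a group atomically like this, instead of one `kubectl apply` per file, keeps the per-addon round trips (and the log noise) down to a single `ssh_runner` call.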
	I0813 03:31:01.011039 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.700714405s)
	I0813 03:31:01.011064 2022781 addons.go:313] Verifying addon registry=true in "addons-20210813032940-2022292"
	I0813 03:31:01.013288 2022781 out.go:177] * Verifying registry addon...
	I0813 03:31:01.014895 2022781 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0813 03:31:01.011401 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.637715134s)
	I0813 03:31:01.015041 2022781 addons.go:313] Verifying addon metrics-server=true in "addons-20210813032940-2022292"
	I0813 03:31:01.011418 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (2.70679067s)
	I0813 03:31:01.015053 2022781 addons.go:313] Verifying addon ingress=true in "addons-20210813032940-2022292"
	I0813 03:31:01.011558 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.565182254s)
	I0813 03:31:01.011669 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.433133243s)
	I0813 03:31:01.017502 2022781 out.go:177] * Verifying ingress addon...
	I0813 03:31:01.019575 2022781 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0813 03:31:01.054527 2022781 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 03:31:01.054568 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:01.075891 2022781 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0813 03:31:01.075946 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:01.441335 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:01.605451 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:01.606015 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:02.073277 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:02.097062 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:02.608493 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:02.684632 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:03.124082 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:03.130892 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:03.449076 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:03.559480 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.612543123s)
	W0813 03:31:03.559512 2022781 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0813 03:31:03.559537 2022781 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0813 03:31:03.559557 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (4.896220344s)
	W0813 03:31:03.559572 2022781 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 03:31:03.559579 2022781 retry.go:31] will retry after 291.140013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 03:31:03.559643 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.845306221s)
	I0813 03:31:03.559655 2022781 addons.go:313] Verifying addon gcp-auth=true in "addons-20210813032940-2022292"
	I0813 03:31:03.563647 2022781 out.go:177] * Verifying gcp-auth addon...
	I0813 03:31:03.565513 2022781 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0813 03:31:03.658489 2022781 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0813 03:31:03.658547 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:03.659098 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:03.679372 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:03.850862 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 03:31:03.920365 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 03:31:04.119619 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:04.140362 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:04.167248 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:04.576001 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:04.585046 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:04.682050 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:04.995932 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.05536482s)
	I0813 03:31:04.996001 2022781 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210813032940-2022292"
	I0813 03:31:04.999927 2022781 out.go:177] * Verifying csi-hostpath-driver addon...
	I0813 03:31:05.001725 2022781 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0813 03:31:05.051965 2022781 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0813 03:31:05.052027 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:05.069278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:05.101185 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:05.260498 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:05.490392 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:05.589788 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:05.591256 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:05.595113 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:05.661721 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:06.068823 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:06.075971 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:06.080721 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:06.092778 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (2.241844201s)
	I0813 03:31:06.092890 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.172455914s)
	I0813 03:31:06.166316 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:06.559342 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:06.560990 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:06.578867 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:06.661958 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:07.056869 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:07.059136 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:07.078935 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:07.162308 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:07.558175 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:07.558364 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:07.579127 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:07.661829 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:07.908271 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:08.057749 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:08.060739 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:08.088631 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:08.166573 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:08.557935 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:08.560943 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:08.586563 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:08.661083 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:09.071552 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:09.071943 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:09.078283 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:09.161747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:09.558910 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:09.560846 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:09.579424 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:09.661747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:10.058388 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:10.059019 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:10.079633 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:10.161876 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:10.407358 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:10.556948 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:10.558279 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:10.578788 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:10.661635 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:11.057295 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:11.059187 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:11.079035 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:11.161068 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:11.557584 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:11.558891 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:11.578602 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:11.661347 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:12.057052 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:12.058710 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:12.079397 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:12.161365 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:12.407663 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:12.557331 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:12.558836 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:12.579555 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:12.661269 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:13.058425 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:13.058819 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:13.079522 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:13.161225 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:13.557182 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:13.559122 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:13.578539 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:13.662249 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:14.056888 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:14.058979 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:14.079485 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:14.161653 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:14.557930 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:14.559013 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:14.578707 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:14.662203 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:14.908057 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:15.058555 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:15.058960 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:15.079677 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:15.161960 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:15.556797 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:15.558599 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:15.579189 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:15.665520 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:16.058280 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:16.059533 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:16.078991 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:16.161336 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:16.557900 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:16.559209 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:16.578911 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:16.662278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:16.908218 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:17.058976 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:17.062797 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:17.078598 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:17.161084 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:17.556309 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:17.558779 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:17.579316 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:17.661159 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:18.056254 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:18.058097 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:18.078601 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:18.161428 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:18.557023 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:18.558366 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:18.578650 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:18.660860 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:19.056829 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:19.057601 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:19.079058 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:19.160810 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:19.406830 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:19.557137 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:19.570593 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:19.578919 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:19.661198 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:20.056580 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:20.058011 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:20.078534 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:20.160728 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:20.556698 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:20.558044 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:20.578573 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:20.661305 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:21.055749 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:21.057234 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:21.078640 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:21.162419 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:21.407789 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:21.556491 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:21.558398 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:21.578920 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:21.661739 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:22.056835 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:22.059358 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:22.078758 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:22.161223 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:22.557289 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:22.558276 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:22.578866 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:22.661269 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:23.056957 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:23.058542 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:23.079179 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:23.160926 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:23.556244 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:23.565480 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:23.579030 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:23.661406 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:23.907689 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:24.056686 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:24.058284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:24.078803 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:24.161543 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:24.557645 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:24.559094 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:24.578505 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:24.661225 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:25.056915 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:25.058921 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:25.079371 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:25.161221 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:25.558548 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:25.560666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:25.578985 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:25.661314 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:25.908555 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:26.057813 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:26.060606 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:26.079261 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:26.162092 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:26.559363 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:26.563308 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:26.579390 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:26.662424 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:27.057172 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:27.058133 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:27.078618 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:27.160769 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:27.557054 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:27.558322 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:27.578892 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:27.661134 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:28.056649 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:28.058426 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:28.078843 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:28.161113 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:28.407578 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:28.556884 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:28.558849 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:28.579213 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:28.661872 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:29.062391 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:29.064747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:29.079349 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:29.161744 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:29.559485 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:29.559615 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:29.588437 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:29.661647 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:30.176028 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:30.178278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:30.178550 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:30.179384 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:30.557737 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:30.559734 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:30.579399 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:30.661487 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:30.908526 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:31.058560 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:31.058776 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:31.079841 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:31.162063 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:31.557377 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:31.559871 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:31.579905 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:31.662325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:32.058012 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:32.059445 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:32.079203 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:32.162061 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:32.556579 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:32.558342 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:32.579193 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:32.661445 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:33.056971 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:33.062490 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:33.079405 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:33.161773 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:33.407515 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:33.557161 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:33.559301 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:33.578865 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:33.662094 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:34.058311 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:34.063650 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:34.079156 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:34.161208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:34.556999 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:34.559078 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:34.578785 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:34.661861 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:35.057159 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:35.058229 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:35.078820 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:35.161890 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:35.407603 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:35.556952 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:35.559040 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:35.579535 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:35.661201 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:36.057747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:36.059208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:36.078661 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:36.161392 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:36.558465 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:36.559070 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:36.578566 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:36.661888 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:37.057392 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:37.060584 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:37.079304 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:37.161650 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:37.564374 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:37.565926 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:37.579818 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:37.661622 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:37.907609 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:38.058278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:38.058920 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:38.079175 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:38.161440 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:38.557591 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:38.559313 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:38.579040 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:38.661840 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:39.056666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:39.058149 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:39.078850 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:39.161258 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:39.556921 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:39.558888 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:39.579613 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:39.661796 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:40.057861 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:40.059831 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:40.079270 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:40.161703 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:40.407456 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:40.556897 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:40.558705 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:40.579932 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:40.661833 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:41.056472 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:41.059744 2022781 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 03:31:41.059763 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:41.079609 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:41.161642 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:41.407679 2022781 node_ready.go:49] node "addons-20210813032940-2022292" has status "Ready":"True"
	I0813 03:31:41.407707 2022781 node_ready.go:38] duration metric: took 44.009250418s waiting for node "addons-20210813032940-2022292" to be "Ready" ...
	I0813 03:31:41.407716 2022781 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 03:31:41.415156 2022781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:41.558078 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:41.560565 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:41.579199 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:41.661913 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:42.056679 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:42.059509 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:42.079025 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:42.161532 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:42.556679 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:42.559194 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:42.579531 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:42.660981 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:43.057202 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:43.059371 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:43.079006 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:43.161759 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:43.434237 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:43.558666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:43.558977 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:43.579098 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:43.662316 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:44.056791 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:44.058613 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:44.079594 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:44.161076 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:44.556840 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:44.559275 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:44.578741 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:44.661566 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:45.057098 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:45.058829 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:45.079389 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:45.161883 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:45.436262 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:45.575150 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:45.584294 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:45.585026 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:45.661603 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:46.057182 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:46.059769 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:46.079195 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:46.162319 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:46.556459 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:46.559487 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:46.580902 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:46.661798 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:47.057717 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:47.059285 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:47.079021 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:47.161669 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:47.438260 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:47.559550 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:47.559922 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:47.579440 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:47.661740 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:48.057821 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:48.061063 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:48.079047 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:48.161877 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:48.559849 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:48.560623 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:48.579150 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:48.662015 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:49.056892 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:49.058463 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:49.078937 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:49.163554 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:49.557752 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:49.564778 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:49.579620 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:49.661737 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:49.933528 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:50.061568 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:50.062915 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:50.078976 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:50.161696 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:50.557144 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:50.559099 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:50.578749 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:50.661769 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:51.058325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:51.060796 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:51.082729 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:51.165438 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:51.558033 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:51.560059 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:51.578642 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:51.661962 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:51.937811 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:31:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:52.058037 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:52.065290 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:52.082306 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:52.162234 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:52.561858 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:52.563177 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:52.579176 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:52.662893 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:53.058836 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:53.059610 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:53.079567 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:53.161708 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:53.556763 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:53.560389 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:53.580992 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:53.668937 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:54.066272 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:54.066848 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:54.083319 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:54.162629 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:54.436037 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:31:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:54.556924 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:54.559716 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:54.579214 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:54.662311 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:55.059249 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:55.061042 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:55.078970 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:55.161897 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:55.557313 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:55.559641 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:55.579729 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:55.661895 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:56.059000 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:56.062237 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:56.079500 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:56.161665 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:56.436902 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:31:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:56.566261 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:56.566676 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:56.578767 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:56.661644 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:57.057090 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:57.059556 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:57.079502 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:57.161359 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:57.557584 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:57.559448 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:57.579110 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:57.661572 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:58.057085 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:58.058884 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:58.079179 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:58.161220 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:58.538956 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace has status "Ready":"False"
	I0813 03:31:58.557350 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:58.559691 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:58.579943 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:58.791628 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:59.059276 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:59.061564 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:59.079752 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:59.161666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:59.435689 2022781 pod_ready.go:92] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.435753 2022781 pod_ready.go:81] duration metric: took 18.020568142s waiting for pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.435791 2022781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.441594 2022781 pod_ready.go:92] pod "etcd-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.441612 2022781 pod_ready.go:81] duration metric: took 5.786951ms waiting for pod "etcd-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.441623 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.445277 2022781 pod_ready.go:92] pod "kube-apiserver-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.445293 2022781 pod_ready.go:81] duration metric: took 3.644132ms waiting for pod "kube-apiserver-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.445302 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.449072 2022781 pod_ready.go:92] pod "kube-controller-manager-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.449092 2022781 pod_ready.go:81] duration metric: took 3.769744ms waiting for pod "kube-controller-manager-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.449101 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9knsw" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.452876 2022781 pod_ready.go:92] pod "kube-proxy-9knsw" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.452894 2022781 pod_ready.go:81] duration metric: took 3.786088ms waiting for pod "kube-proxy-9knsw" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.452902 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.557933 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:59.559887 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:59.579750 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:59.661390 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:59.833698 2022781 pod_ready.go:92] pod "kube-scheduler-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.833721 2022781 pod_ready.go:81] duration metric: took 380.809632ms waiting for pod "kube-scheduler-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.833732 2022781 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace to be "Ready" ...
	I0813 03:32:00.057507 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:00.060450 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:00.094038 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:00.162156 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:00.559531 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:00.560807 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:00.581504 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:00.662211 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:01.059861 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:01.066023 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:01.079504 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:01.161767 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:01.559334 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:01.561759 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:01.579832 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:01.662353 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:02.056935 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:02.059383 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:02.079185 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:02.161090 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:02.240408 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:02.561477 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:02.562142 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:02.579306 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:02.662331 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:03.059012 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:03.059508 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:03.079655 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:03.162089 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:03.559560 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:03.561402 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:03.579678 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:03.661723 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:04.059263 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:04.059783 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:04.078943 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:04.162284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:04.241030 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:04.557153 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:04.560141 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:04.579728 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:04.662513 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:05.057080 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:05.059685 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:05.079032 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:05.162737 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:05.557144 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:05.559484 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:05.579579 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:05.661242 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:06.057042 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:06.059442 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:06.079396 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:06.162304 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:06.557204 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:06.559494 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:06.579157 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:06.661762 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:06.740372 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:07.057455 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:07.059940 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:07.079231 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:07.161987 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:07.559284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:07.559814 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:07.579364 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:07.662813 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:08.073362 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:08.079626 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:08.084214 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:08.163307 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:08.559617 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:08.560235 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:08.579114 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:08.662066 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:08.741784 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:09.056830 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:09.059188 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:09.088691 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:09.162660 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:09.562253 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:09.563021 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:09.579227 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:09.661934 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:10.059459 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:10.063371 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:10.079184 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:10.175815 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:10.593011 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:10.593586 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:10.594885 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:10.661226 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:11.058162 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:11.058643 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:11.079722 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:11.161708 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:11.240182 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:11.558205 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:11.560235 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:11.586837 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:11.661385 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:11.739491 2022781 pod_ready.go:92] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"True"
	I0813 03:32:11.739550 2022781 pod_ready.go:81] duration metric: took 11.905794872s waiting for pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace to be "Ready" ...
	I0813 03:32:11.739581 2022781 pod_ready.go:38] duration metric: took 30.331840863s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 03:32:11.739626 2022781 api_server.go:50] waiting for apiserver process to appear ...
	I0813 03:32:11.739656 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 03:32:11.739748 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 03:32:11.812172 2022781 cri.go:76] found id: "e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:11.812187 2022781 cri.go:76] found id: ""
	I0813 03:32:11.812193 2022781 logs.go:270] 1 containers: [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3]
	I0813 03:32:11.812263 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.814780 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 03:32:11.814825 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 03:32:11.843019 2022781 cri.go:76] found id: "3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:11.843063 2022781 cri.go:76] found id: ""
	I0813 03:32:11.843075 2022781 logs.go:270] 1 containers: [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11]
	I0813 03:32:11.843111 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.845551 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 03:32:11.845596 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 03:32:11.867609 2022781 cri.go:76] found id: "76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:11.867624 2022781 cri.go:76] found id: ""
	I0813 03:32:11.867630 2022781 logs.go:270] 1 containers: [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd]
	I0813 03:32:11.867685 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.870294 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 03:32:11.870336 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 03:32:11.894045 2022781 cri.go:76] found id: "ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:11.894086 2022781 cri.go:76] found id: ""
	I0813 03:32:11.894097 2022781 logs.go:270] 1 containers: [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b]
	I0813 03:32:11.894130 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.896655 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 03:32:11.896697 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 03:32:11.925307 2022781 cri.go:76] found id: "b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:11.925323 2022781 cri.go:76] found id: ""
	I0813 03:32:11.925330 2022781 logs.go:270] 1 containers: [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea]
	I0813 03:32:11.925365 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.927899 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 03:32:11.927939 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 03:32:11.949495 2022781 cri.go:76] found id: ""
	I0813 03:32:11.949534 2022781 logs.go:270] 0 containers: []
	W0813 03:32:11.949546 2022781 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 03:32:11.949552 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 03:32:11.949587 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 03:32:11.971549 2022781 cri.go:76] found id: "f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:11.971566 2022781 cri.go:76] found id: ""
	I0813 03:32:11.971571 2022781 logs.go:270] 1 containers: [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca]
	I0813 03:32:11.971620 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.974054 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 03:32:11.974095 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 03:32:12.003331 2022781 cri.go:76] found id: "fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:12.003378 2022781 cri.go:76] found id: ""
	I0813 03:32:12.003394 2022781 logs.go:270] 1 containers: [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377]
	I0813 03:32:12.003459 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:12.006140 2022781 logs.go:123] Gathering logs for kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] ...
	I0813 03:32:12.006155 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:12.033019 2022781 logs.go:123] Gathering logs for containerd ...
	I0813 03:32:12.033040 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 03:32:12.068283 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:12.068688 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:12.079690 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:12.118724 2022781 logs.go:123] Gathering logs for kubelet ...
	I0813 03:32:12.118744 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 03:32:12.161825 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0813 03:32:12.177299 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:12.177544 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:12.214534 2022781 logs.go:123] Gathering logs for dmesg ...
	I0813 03:32:12.214555 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 03:32:12.230395 2022781 logs.go:123] Gathering logs for etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] ...
	I0813 03:32:12.230412 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:12.255618 2022781 logs.go:123] Gathering logs for coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] ...
	I0813 03:32:12.255638 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:12.276571 2022781 logs.go:123] Gathering logs for kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] ...
	I0813 03:32:12.276592 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:12.300947 2022781 logs.go:123] Gathering logs for storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] ...
	I0813 03:32:12.300966 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:12.323282 2022781 logs.go:123] Gathering logs for kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] ...
	I0813 03:32:12.323301 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:12.375068 2022781 logs.go:123] Gathering logs for container status ...
	I0813 03:32:12.375093 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 03:32:12.401272 2022781 logs.go:123] Gathering logs for describe nodes ...
	I0813 03:32:12.401291 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 03:32:12.548904 2022781 logs.go:123] Gathering logs for kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] ...
	I0813 03:32:12.548930 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:12.559375 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:12.562030 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:12.579464 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:12.659062 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:12.659085 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0813 03:32:12.659194 2022781 out.go:242] X Problems detected in kubelet:
	W0813 03:32:12.659204 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:12.659211 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:12.659217 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:12.659225 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:32:12.673862 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:13.059325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:13.062776 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:13.087927 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:13.162237 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:13.560080 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:13.565340 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:13.579524 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:13.661442 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:14.060788 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:14.062591 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:14.079441 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:14.161105 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:14.556863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:14.558839 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:14.579411 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:14.661226 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:15.058603 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:15.060542 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:15.080085 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:15.162195 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:15.558043 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:15.559656 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:15.580190 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:15.665749 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:16.057055 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:16.059113 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:16.079769 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:16.161497 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:16.559200 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:16.560087 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:16.579835 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:16.662097 2022781 kapi.go:108] duration metric: took 1m13.096582398s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0813 03:32:16.664068 2022781 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20210813032940-2022292 cluster.
	I0813 03:32:16.670830 2022781 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0813 03:32:16.676788 2022781 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0813 03:32:17.057899 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:17.059706 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:17.079747 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:17.558152 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:17.560601 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:17.579004 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:18.061308 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:18.062057 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:18.079880 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:18.558150 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:18.560757 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:18.579782 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:19.056963 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:19.059210 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:19.078839 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:19.557242 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:19.559678 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:19.579861 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:20.056923 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:20.060320 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:20.078955 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:20.560350 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:20.561208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:20.579305 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:21.067467 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:21.067845 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:21.080703 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:21.556957 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:21.564057 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:21.579253 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:22.062724 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:22.065424 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:22.080480 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:22.558437 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:22.563395 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:22.580348 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:22.659953 2022781 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:32:22.694110 2022781 api_server.go:70] duration metric: took 1m25.501422314s to wait for apiserver process to appear ...
	I0813 03:32:22.694173 2022781 api_server.go:86] waiting for apiserver healthz status ...
	I0813 03:32:22.694205 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 03:32:22.694282 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 03:32:22.732785 2022781 cri.go:76] found id: "e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:22.732842 2022781 cri.go:76] found id: ""
	I0813 03:32:22.732861 2022781 logs.go:270] 1 containers: [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3]
	I0813 03:32:22.732936 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.736230 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 03:32:22.736312 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 03:32:22.765159 2022781 cri.go:76] found id: "3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:22.765209 2022781 cri.go:76] found id: ""
	I0813 03:32:22.765228 2022781 logs.go:270] 1 containers: [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11]
	I0813 03:32:22.765308 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.768400 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 03:32:22.768493 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 03:32:22.825296 2022781 cri.go:76] found id: "76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:22.825357 2022781 cri.go:76] found id: ""
	I0813 03:32:22.825375 2022781 logs.go:270] 1 containers: [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd]
	I0813 03:32:22.825450 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.829195 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 03:32:22.829285 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 03:32:22.881181 2022781 cri.go:76] found id: "ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:22.881242 2022781 cri.go:76] found id: ""
	I0813 03:32:22.881259 2022781 logs.go:270] 1 containers: [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b]
	I0813 03:32:22.881329 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.885820 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 03:32:22.885908 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 03:32:22.982661 2022781 cri.go:76] found id: "b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:22.982714 2022781 cri.go:76] found id: ""
	I0813 03:32:22.982741 2022781 logs.go:270] 1 containers: [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea]
	I0813 03:32:22.982813 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.986328 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 03:32:22.986378 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 03:32:23.027000 2022781 cri.go:76] found id: ""
	I0813 03:32:23.027018 2022781 logs.go:270] 0 containers: []
	W0813 03:32:23.027025 2022781 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 03:32:23.027032 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 03:32:23.027083 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 03:32:23.060139 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:23.061278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:23.069559 2022781 cri.go:76] found id: "f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:23.069608 2022781 cri.go:76] found id: ""
	I0813 03:32:23.069627 2022781 logs.go:270] 1 containers: [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca]
	I0813 03:32:23.069694 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:23.074554 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 03:32:23.074645 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 03:32:23.080185 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:23.110198 2022781 cri.go:76] found id: "fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:23.110256 2022781 cri.go:76] found id: ""
	I0813 03:32:23.110274 2022781 logs.go:270] 1 containers: [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377]
	I0813 03:32:23.110353 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:23.114338 2022781 logs.go:123] Gathering logs for containerd ...
	I0813 03:32:23.114442 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 03:32:23.226451 2022781 logs.go:123] Gathering logs for dmesg ...
	I0813 03:32:23.226481 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 03:32:23.241683 2022781 logs.go:123] Gathering logs for describe nodes ...
	I0813 03:32:23.241711 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 03:32:23.499313 2022781 logs.go:123] Gathering logs for kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] ...
	I0813 03:32:23.499341 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:23.556000 2022781 logs.go:123] Gathering logs for kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] ...
	I0813 03:32:23.556029 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:23.561069 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:23.563716 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:23.580520 2022781 kapi.go:108] duration metric: took 1m22.560942925s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0813 03:32:23.620196 2022781 logs.go:123] Gathering logs for kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] ...
	I0813 03:32:23.620226 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:23.649357 2022781 logs.go:123] Gathering logs for storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] ...
	I0813 03:32:23.649384 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:23.680023 2022781 logs.go:123] Gathering logs for container status ...
	I0813 03:32:23.680049 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 03:32:23.722823 2022781 logs.go:123] Gathering logs for kubelet ...
	I0813 03:32:23.722853 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0813 03:32:23.784859 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:23.785152 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:23.823338 2022781 logs.go:123] Gathering logs for etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] ...
	I0813 03:32:23.823392 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:23.857325 2022781 logs.go:123] Gathering logs for coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] ...
	I0813 03:32:23.857354 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:23.888625 2022781 logs.go:123] Gathering logs for kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] ...
	I0813 03:32:23.888649 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:23.943578 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:23.943633 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0813 03:32:23.943825 2022781 out.go:242] X Problems detected in kubelet:
	W0813 03:32:23.943838 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:23.943847 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:23.943858 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:23.943863 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:32:24.063676 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:24.065290 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:24.557303 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:24.559593 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:25.058802 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:25.059726 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:25.557197 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:25.559980 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:26.057045 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:26.059553 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:26.556742 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:26.559573 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:27.058797 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:27.065273 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:27.557423 2022781 kapi.go:108] duration metric: took 1m22.555695527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0813 03:32:27.567926 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:28.058729 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:28.558602 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:29.058767 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:29.558686 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:30.059333 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:30.558577 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:31.057998 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:31.558541 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:32.058329 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:32.558334 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:33.058677 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:33.559382 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:33.945317 2022781 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 03:32:33.954121 2022781 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 03:32:33.955027 2022781 api_server.go:139] control plane version: v1.21.3
	I0813 03:32:33.955048 2022781 api_server.go:129] duration metric: took 11.260858091s to wait for apiserver health ...
	I0813 03:32:33.955057 2022781 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 03:32:33.955075 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 03:32:33.955134 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 03:32:33.989217 2022781 cri.go:76] found id: "e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:33.989241 2022781 cri.go:76] found id: ""
	I0813 03:32:33.989246 2022781 logs.go:270] 1 containers: [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3]
	I0813 03:32:33.989289 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:33.991827 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 03:32:33.991873 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 03:32:34.015261 2022781 cri.go:76] found id: "3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:34.015279 2022781 cri.go:76] found id: ""
	I0813 03:32:34.015285 2022781 logs.go:270] 1 containers: [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11]
	I0813 03:32:34.015324 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.017874 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 03:32:34.017921 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 03:32:34.040647 2022781 cri.go:76] found id: "76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:34.040664 2022781 cri.go:76] found id: ""
	I0813 03:32:34.040669 2022781 logs.go:270] 1 containers: [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd]
	I0813 03:32:34.040711 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.043319 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 03:32:34.043370 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 03:32:34.059259 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:34.069010 2022781 cri.go:76] found id: "ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:34.069034 2022781 cri.go:76] found id: ""
	I0813 03:32:34.069040 2022781 logs.go:270] 1 containers: [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b]
	I0813 03:32:34.069080 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.071835 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 03:32:34.071887 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 03:32:34.095116 2022781 cri.go:76] found id: "b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:34.095139 2022781 cri.go:76] found id: ""
	I0813 03:32:34.095145 2022781 logs.go:270] 1 containers: [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea]
	I0813 03:32:34.095190 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.097821 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 03:32:34.097868 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 03:32:34.119306 2022781 cri.go:76] found id: ""
	I0813 03:32:34.119322 2022781 logs.go:270] 0 containers: []
	W0813 03:32:34.119328 2022781 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 03:32:34.119334 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 03:32:34.119379 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 03:32:34.142251 2022781 cri.go:76] found id: "f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:34.142273 2022781 cri.go:76] found id: ""
	I0813 03:32:34.142279 2022781 logs.go:270] 1 containers: [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca]
	I0813 03:32:34.142334 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.144992 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 03:32:34.145041 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 03:32:34.167211 2022781 cri.go:76] found id: "fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:34.167227 2022781 cri.go:76] found id: ""
	I0813 03:32:34.167232 2022781 logs.go:270] 1 containers: [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377]
	I0813 03:32:34.167272 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.169792 2022781 logs.go:123] Gathering logs for kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] ...
	I0813 03:32:34.169810 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:34.216311 2022781 logs.go:123] Gathering logs for containerd ...
	I0813 03:32:34.216383 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 03:32:34.298041 2022781 logs.go:123] Gathering logs for dmesg ...
	I0813 03:32:34.298069 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 03:32:34.310283 2022781 logs.go:123] Gathering logs for describe nodes ...
	I0813 03:32:34.310309 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 03:32:34.442777 2022781 logs.go:123] Gathering logs for kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] ...
	I0813 03:32:34.442803 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:34.491851 2022781 logs.go:123] Gathering logs for etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] ...
	I0813 03:32:34.491910 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:34.521318 2022781 logs.go:123] Gathering logs for kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] ...
	I0813 03:32:34.521347 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:34.544930 2022781 logs.go:123] Gathering logs for storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] ...
	I0813 03:32:34.544954 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:34.558879 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:34.568978 2022781 logs.go:123] Gathering logs for container status ...
	I0813 03:32:34.569002 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 03:32:34.595398 2022781 logs.go:123] Gathering logs for kubelet ...
	I0813 03:32:34.595422 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0813 03:32:34.648293 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:34.648542 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:34.694139 2022781 logs.go:123] Gathering logs for coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] ...
	I0813 03:32:34.694164 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:34.716899 2022781 logs.go:123] Gathering logs for kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] ...
	I0813 03:32:34.716924 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:34.743237 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:34.743258 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0813 03:32:34.743376 2022781 out.go:242] X Problems detected in kubelet:
	W0813 03:32:34.743389 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:34.743396 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:34.743409 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:34.743414 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:32:35.059544 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:35.558823 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:36.058742 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:36.558321 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:37.059068 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:37.559539 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:38.059128 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:38.559174 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:39.058759 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:39.558560 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:40.059643 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:40.558958 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:41.058990 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:41.559162 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:42.057900 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:42.558835 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:43.059061 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:43.559393 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:44.058040 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:44.558542 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:44.758638 2022781 system_pods.go:59] 18 kube-system pods found
	I0813 03:32:44.758676 2022781 system_pods.go:61] "coredns-558bd4d5db-69x4l" [ef73518e-08da-4a27-a504-85f6e14fde4e] Running
	I0813 03:32:44.758682 2022781 system_pods.go:61] "csi-hostpath-attacher-0" [5b8c9e1d-36af-484a-8f71-8cbdc93e1848] Running
	I0813 03:32:44.758686 2022781 system_pods.go:61] "csi-hostpath-provisioner-0" [d285e137-046a-4c9f-8a5c-b513a07b4ac1] Running
	I0813 03:32:44.758691 2022781 system_pods.go:61] "csi-hostpath-resizer-0" [d005f1b6-ee22-4ef7-aabe-9ad79b904d8e] Running
	I0813 03:32:44.758696 2022781 system_pods.go:61] "csi-hostpath-snapshotter-0" [203926d6-73de-49f5-8477-3d1cf26d233e] Running
	I0813 03:32:44.758701 2022781 system_pods.go:61] "csi-hostpathplugin-0" [e5a40ad3-af6f-4aca-9d33-5d9620d28d85] Running
	I0813 03:32:44.758707 2022781 system_pods.go:61] "etcd-addons-20210813032940-2022292" [5e80a189-29fa-44b4-b290-7896746c4542] Running
	I0813 03:32:44.758712 2022781 system_pods.go:61] "kindnet-6qhgq" [41b60387-4d90-4496-a617-d04aaf6d654a] Running
	I0813 03:32:44.758717 2022781 system_pods.go:61] "kube-apiserver-addons-20210813032940-2022292" [e344bbc9-9190-49fe-915e-c8460a1fbe6e] Running
	I0813 03:32:44.758727 2022781 system_pods.go:61] "kube-controller-manager-addons-20210813032940-2022292" [d2701d00-a6a6-4ab5-b211-39592390ce8e] Running
	I0813 03:32:44.758731 2022781 system_pods.go:61] "kube-proxy-9knsw" [05bf3f71-808d-4e24-a416-a4434e16e0ac] Running
	I0813 03:32:44.758743 2022781 system_pods.go:61] "kube-scheduler-addons-20210813032940-2022292" [6488f7a4-94e7-41c1-b202-305d463dfac2] Running
	I0813 03:32:44.758747 2022781 system_pods.go:61] "metrics-server-77c99ccb96-vn6tn" [985bccb5-7c0b-4df0-91ce-0cd5e67a9688] Running
	I0813 03:32:44.758755 2022781 system_pods.go:61] "registry-5f6m6" [b842920d-03bf-4426-9765-5fb36b90afb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 03:32:44.758768 2022781 system_pods.go:61] "registry-proxy-dg8n7" [3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 03:32:44.758774 2022781 system_pods.go:61] "snapshot-controller-989f9ddc8-6wzsp" [4b25bcd7-a3bd-4549-9476-87a13b4022d1] Running
	I0813 03:32:44.758779 2022781 system_pods.go:61] "snapshot-controller-989f9ddc8-shj76" [0d371d4d-113a-4eb0-bb5d-4d52d2ecf7a5] Running
	I0813 03:32:44.758784 2022781 system_pods.go:61] "storage-provisioner" [9788a546-bd3b-45bb-98c8-f5dc3efa1001] Running
	I0813 03:32:44.758792 2022781 system_pods.go:74] duration metric: took 10.803729677s to wait for pod list to return data ...
	I0813 03:32:44.758802 2022781 default_sa.go:34] waiting for default service account to be created ...
	I0813 03:32:44.761387 2022781 default_sa.go:45] found service account: "default"
	I0813 03:32:44.761408 2022781 default_sa.go:55] duration metric: took 2.593402ms for default service account to be created ...
	I0813 03:32:44.761414 2022781 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 03:32:44.774394 2022781 system_pods.go:86] 18 kube-system pods found
	I0813 03:32:44.774420 2022781 system_pods.go:89] "coredns-558bd4d5db-69x4l" [ef73518e-08da-4a27-a504-85f6e14fde4e] Running
	I0813 03:32:44.774427 2022781 system_pods.go:89] "csi-hostpath-attacher-0" [5b8c9e1d-36af-484a-8f71-8cbdc93e1848] Running
	I0813 03:32:44.774432 2022781 system_pods.go:89] "csi-hostpath-provisioner-0" [d285e137-046a-4c9f-8a5c-b513a07b4ac1] Running
	I0813 03:32:44.774441 2022781 system_pods.go:89] "csi-hostpath-resizer-0" [d005f1b6-ee22-4ef7-aabe-9ad79b904d8e] Running
	I0813 03:32:44.774451 2022781 system_pods.go:89] "csi-hostpath-snapshotter-0" [203926d6-73de-49f5-8477-3d1cf26d233e] Running
	I0813 03:32:44.774456 2022781 system_pods.go:89] "csi-hostpathplugin-0" [e5a40ad3-af6f-4aca-9d33-5d9620d28d85] Running
	I0813 03:32:44.774464 2022781 system_pods.go:89] "etcd-addons-20210813032940-2022292" [5e80a189-29fa-44b4-b290-7896746c4542] Running
	I0813 03:32:44.774469 2022781 system_pods.go:89] "kindnet-6qhgq" [41b60387-4d90-4496-a617-d04aaf6d654a] Running
	I0813 03:32:44.774475 2022781 system_pods.go:89] "kube-apiserver-addons-20210813032940-2022292" [e344bbc9-9190-49fe-915e-c8460a1fbe6e] Running
	I0813 03:32:44.774484 2022781 system_pods.go:89] "kube-controller-manager-addons-20210813032940-2022292" [d2701d00-a6a6-4ab5-b211-39592390ce8e] Running
	I0813 03:32:44.774489 2022781 system_pods.go:89] "kube-proxy-9knsw" [05bf3f71-808d-4e24-a416-a4434e16e0ac] Running
	I0813 03:32:44.774497 2022781 system_pods.go:89] "kube-scheduler-addons-20210813032940-2022292" [6488f7a4-94e7-41c1-b202-305d463dfac2] Running
	I0813 03:32:44.774502 2022781 system_pods.go:89] "metrics-server-77c99ccb96-vn6tn" [985bccb5-7c0b-4df0-91ce-0cd5e67a9688] Running
	I0813 03:32:44.774514 2022781 system_pods.go:89] "registry-5f6m6" [b842920d-03bf-4426-9765-5fb36b90afb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 03:32:44.774522 2022781 system_pods.go:89] "registry-proxy-dg8n7" [3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 03:32:44.774530 2022781 system_pods.go:89] "snapshot-controller-989f9ddc8-6wzsp" [4b25bcd7-a3bd-4549-9476-87a13b4022d1] Running
	I0813 03:32:44.774536 2022781 system_pods.go:89] "snapshot-controller-989f9ddc8-shj76" [0d371d4d-113a-4eb0-bb5d-4d52d2ecf7a5] Running
	I0813 03:32:44.774545 2022781 system_pods.go:89] "storage-provisioner" [9788a546-bd3b-45bb-98c8-f5dc3efa1001] Running
	I0813 03:32:44.774550 2022781 system_pods.go:126] duration metric: took 13.132138ms to wait for k8s-apps to be running ...
	I0813 03:32:44.774559 2022781 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 03:32:44.774606 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 03:32:44.786726 2022781 system_svc.go:56] duration metric: took 12.16177ms WaitForService to wait for kubelet.
	I0813 03:32:44.786742 2022781 kubeadm.go:547] duration metric: took 1m47.594069487s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 03:32:44.786764 2022781 node_conditions.go:102] verifying NodePressure condition ...
	I0813 03:32:44.790042 2022781 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0813 03:32:44.790072 2022781 node_conditions.go:123] node cpu capacity is 2
	I0813 03:32:44.790084 2022781 node_conditions.go:105] duration metric: took 3.315795ms to run NodePressure ...
	I0813 03:32:44.790093 2022781 start.go:231] waiting for startup goroutines ...
	I0813 03:32:45.059527 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:45.558450 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:46.058008 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:46.559409 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:47.058981 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:47.559418 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:48.058295 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:48.559417 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:49.058479 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:49.558767 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:50.059036 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:50.559080 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:51.058876 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:51.559246 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:52.059058 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:52.559792 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:53.059872 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:53.559369 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:54.059719 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:54.558692 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:55.059624 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:55.558707 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:56.058551 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:56.558705 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:57.059182 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:57.558288 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:58.058885 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:58.559208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:59.058341 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:59.558602 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:00.058816 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:00.559084 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:01.065657 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:01.558657 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:02.059321 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:02.559541 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:03.059005 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:03.559631 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:04.058927 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:04.558678 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:05.059707 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:05.558780 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:06.058956 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:06.559528 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:07.059164 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:07.558609 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:08.059085 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:08.558274 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:09.058026 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:09.558984 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:10.058747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:10.558098 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:11.058941 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:11.559199 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:12.059601 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:12.562803 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:13.058537 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:13.558131 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:14.059592 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:14.558844 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:15.059696 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:15.559326 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:16.059259 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:16.558421 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:17.058713 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:17.558903 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:18.059615 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:18.558863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:19.058406 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:19.558672 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:20.059847 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:20.558399 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:21.137157 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:21.558717 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:22.059791 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:22.559467 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:23.058838 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:23.558887 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:24.060067 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:24.559548 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:25.058905 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:25.558800 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:26.059414 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:26.558703 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:27.059411 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:27.559626 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:28.059432 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:28.559361 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:29.059128 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:29.558985 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:30.062482 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:30.558231 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:31.058771 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:31.558406 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:32.059047 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:32.559142 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:33.058540 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:33.557931 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:34.058498 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:34.558400 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:35.058954 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:35.559196 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:36.059489 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:36.558425 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:37.059036 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:37.559521 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:38.059863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:38.558898 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:39.059513 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:39.558045 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:40.059686 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:40.560770 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:41.059054 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:41.566863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:42.059037 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:42.558754 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:43.060014 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:43.558325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:44.058848 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:44.558941 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:45.059055 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:45.559095 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:46.059579 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:46.558476 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:47.059065 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:47.559727 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:48.058950 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:48.559632 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:49.075110 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:49.558722 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:50.058944 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:50.558547 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:51.058757 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:51.559264 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:52.059295 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:52.558567 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:53.059202 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:53.558807 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:54.066706 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:54.559004 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:55.059681 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:55.558717 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:56.059365 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:56.558292 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:57.059264 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:57.558279 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:58.058461 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:58.558727 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:59.059322 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:59.559163 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:00.058942 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:00.558919 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:01.058921 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:01.558792 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:02.058858 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:02.558354 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:03.058655 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:03.558982 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:04.059730 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:04.558307 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:05.059036 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:05.559489 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:06.059148 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:06.558284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:07.059011 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:07.559645 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:08.059884 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:08.558543 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:09.059046 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:09.559657 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:10.059052 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:10.559013 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:11.058582 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:11.558297 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:12.059000 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:12.558873 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:13.059650 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:13.559364 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:14.059969 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:14.565763 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:15.059630 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:15.558772 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:16.059717 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:16.559317 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:17.059424 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:17.558716 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:18.058577 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:18.557995 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:19.059142 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:19.558516 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:20.058731 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:20.558597 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:21.058881 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:21.639918 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:22.059768 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:22.558594 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:23.058973 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:23.558827 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:24.059270 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:24.558647 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:25.059468 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:25.559237 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:26.058932 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:26.559092 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:27.059169 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:27.559544 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:28.064873 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:28.559416 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:29.058950 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:29.559331 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:30.059119 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:30.558882 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:31.059118 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:31.558810 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:32.058711 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:32.559098 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:33.059059 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:33.559380 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:34.059186 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:34.558928 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:35.059958 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:35.559653 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:36.059818 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:36.559509 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:37.058808 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:37.559390 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:38.059732 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:38.558788 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:39.059321 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:39.558685 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:40.069653 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:40.559011 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:41.058420 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:41.558616 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:42.058600 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:42.558619 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:43.058712 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:43.558863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:44.059093 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:44.557925 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:45.059096 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:45.558755 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:46.059372 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:46.558486 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:47.058418 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:47.558666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:48.059423 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:34:48.559159 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... same "waiting for pod \"kubernetes.io/minikube-addons=registry\", current state: Pending" message repeated roughly every 500ms from 03:34:49 through 03:37:00 ...]
	I0813 03:37:01.058314 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:01.061314 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:01.061337 2022781 kapi.go:108] duration metric: took 6m0.046443715s to wait for kubernetes.io/minikube-addons=registry ...
	W0813 03:37:01.061452 2022781 out.go:242] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: timed out waiting for the condition]
	I0813 03:37:01.063789 2022781 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, olm, volumesnapshots, gcp-auth, ingress, csi-hostpath-driver
	I0813 03:37:01.063812 2022781 addons.go:344] enableAddons completed in 6m3.870880492s
	I0813 03:37:01.394020 2022781 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0813 03:37:01.396720 2022781 out.go:177] * Done! kubectl is now configured to use "addons-20210813032940-2022292" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8fb59a968f507       60dc18151daf8       1 second ago        Exited              registry-proxy            9                   0f0d8cb5ccd61
	463e949a986b5       d544402579747       4 seconds ago       Exited              catalog-operator          9                   4f10e3f5836dd
	5e5d9abdcdbf4       d544402579747       4 minutes ago       Exited              olm-operator              8                   c13c7bddfc538
	7b688e645fd22       1611cd07b61d5       11 minutes ago      Running             busybox                   0                   335b10c73edd4
	eab8e5f488bb8       357aab9e21a8d       15 minutes ago      Running             registry                  0                   e960d2aa7dc02
	92c18b0912a62       bac9ddccb0c70       20 minutes ago      Running             controller                0                   7b09157779528
	95dcf4a47993d       a883f7fc35610       20 minutes ago      Exited              patch                     0                   537ace0ace14b
	e912ae66fde6f       a883f7fc35610       20 minutes ago      Exited              create                    0                   02f6733c69e7f
	76df34c67e4d8       1a1f05a2cd7c2       21 minutes ago      Running             coredns                   0                   4013fbad24448
	f251119960206       ba04bb24b9575       21 minutes ago      Running             storage-provisioner       0                   4925d0c76d0fb
	b57e0dbb56f13       4ea38350a1beb       22 minutes ago      Running             kube-proxy                0                   77766a5e4eba5
	e811021829de7       f37b7c809e5dc       22 minutes ago      Running             kindnet-cni               0                   84d8cbe537f13
	fb47330aab572       cb310ff289d79       22 minutes ago      Running             kube-controller-manager   0                   c35a71b0e178a
	e34ccd1276019       44a6d50ef170d       22 minutes ago      Running             kube-apiserver            0                   802bb6c418a36
	3c1ce4b5f6d51       05b738aa1bc63       22 minutes ago      Running             etcd                      0                   a0068af440460
	ecb0d384c34ed       31a3b96cefc1e       22 minutes ago      Running             kube-scheduler            0                   6d9eb8373b6c3
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 03:29:47 UTC, end at Fri 2021-08-13 03:53:01 UTC. --
	Aug 13 03:50:25 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:50:25.737892855Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 03:51:45 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:51:45.841326618Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 13 03:51:46 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:51:46.851204083Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 03:52:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:56.842690557Z" level=info msg="CreateContainer within sandbox \"4f10e3f5836dd35a4a061400fffc8820989affa25e9c1a978c9f159e363bbfd0\" for container &ContainerMetadata{Name:catalog-operator,Attempt:9,}"
	Aug 13 03:52:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:56.866227462Z" level=info msg="CreateContainer within sandbox \"4f10e3f5836dd35a4a061400fffc8820989affa25e9c1a978c9f159e363bbfd0\" for &ContainerMetadata{Name:catalog-operator,Attempt:9,} returns container id \"463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad\""
	Aug 13 03:52:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:56.866660133Z" level=info msg="StartContainer for \"463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad\""
	Aug 13 03:52:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:56.938728078Z" level=info msg="Finish piping stderr of container \"463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad\""
	Aug 13 03:52:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:56.938759528Z" level=info msg="Finish piping stdout of container \"463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad\""
	Aug 13 03:52:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:56.943428974Z" level=info msg="StartContainer for \"463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad\" returns successfully"
	Aug 13 03:52:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:56.943559263Z" level=info msg="TaskExit event &TaskExit{ContainerID:463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad,ID:463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad,Pid:20135,ExitStatus:1,ExitedAt:2021-08-13 03:52:56.940255315 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 03:52:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:56.974861284Z" level=info msg="shim disconnected" id=463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad
	Aug 13 03:52:56 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:56.974922420Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	Aug 13 03:52:57 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:57.172457418Z" level=info msg="RemoveContainer for \"dfbff09d21c8207a6be16d184937fac82ac6502f707c212889ae58cd3889bb4f\""
	Aug 13 03:52:57 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:57.178722391Z" level=info msg="RemoveContainer for \"dfbff09d21c8207a6be16d184937fac82ac6502f707c212889ae58cd3889bb4f\" returns successfully"
	Aug 13 03:52:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:59.842270205Z" level=info msg="CreateContainer within sandbox \"0f0d8cb5ccd6164da06bf372175975bcfa4f540d81668e9a80f7b62293605db9\" for container &ContainerMetadata{Name:registry-proxy,Attempt:9,}"
	Aug 13 03:52:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:59.863885505Z" level=info msg="CreateContainer within sandbox \"0f0d8cb5ccd6164da06bf372175975bcfa4f540d81668e9a80f7b62293605db9\" for &ContainerMetadata{Name:registry-proxy,Attempt:9,} returns container id \"8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1\""
	Aug 13 03:52:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:59.864256169Z" level=info msg="StartContainer for \"8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1\""
	Aug 13 03:52:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:59.935202168Z" level=info msg="Finish piping stderr of container \"8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1\""
	Aug 13 03:52:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:59.935285376Z" level=info msg="Finish piping stdout of container \"8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1\""
	Aug 13 03:52:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:59.940551011Z" level=info msg="StartContainer for \"8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1\" returns successfully"
	Aug 13 03:52:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:59.940702731Z" level=info msg="TaskExit event &TaskExit{ContainerID:8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1,ID:8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1,Pid:20194,ExitStatus:1,ExitedAt:2021-08-13 03:52:59.937460847 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 03:52:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:59.963962100Z" level=info msg="shim disconnected" id=8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1
	Aug 13 03:52:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:52:59.964009705Z" level=error msg="copy shim log" error="read /proc/self/fd/105: file already closed"
	Aug 13 03:53:00 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:53:00.177700031Z" level=info msg="RemoveContainer for \"9e12a8c2bdf148fa02bf682b582c4263d299d599d5c7760556fb4b568128ba59\""
	Aug 13 03:53:00 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:53:00.183700258Z" level=info msg="RemoveContainer for \"9e12a8c2bdf148fa02bf682b582c4263d299d599d5c7760556fb4b568128ba59\" returns successfully"
	
	* 
	* ==> coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210813032940-2022292
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210813032940-2022292
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=addons-20210813032940-2022292
	                    minikube.k8s.io/updated_at=2021_08_13T03_30_43_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210813032940-2022292
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 03:30:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210813032940-2022292
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 03:53:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 03:48:48 +0000   Fri, 13 Aug 2021 03:30:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 03:48:48 +0000   Fri, 13 Aug 2021 03:30:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 03:48:48 +0000   Fri, 13 Aug 2021 03:30:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 03:48:48 +0000   Fri, 13 Aug 2021 03:31:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210813032940-2022292
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                cd349576-1400-4f29-881c-2488bb4cb8bc
	  Boot ID:                    0b91f2d0-31de-4b03-9973-67e3d0024ffb
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.6
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     nginx                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	  ingress-nginx               ingress-nginx-controller-59b45fb494-2m89h                100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         22m
	  kube-system                 coredns-558bd4d5db-69x4l                                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     22m
	  kube-system                 etcd-addons-20210813032940-2022292                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         22m
	  kube-system                 kindnet-6qhgq                                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-addons-20210813032940-2022292             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-addons-20210813032940-2022292    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-9knsw                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-addons-20210813032940-2022292             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 registry-5f6m6                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 registry-proxy-dg8n7                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  olm                         catalog-operator-75d496484d-xh6n8                        10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         21m
	  olm                         olm-operator-859c88c96-whcps                             10m (0%)      0 (0%)      160Mi (2%)       0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                970m (48%)  100m (5%)
	  memory             550Mi (7%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 22m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x5 over 22m)  kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x4 over 22m)  kubelet     Node addons-20210813032940-2022292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x4 over 22m)  kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m                kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet     Node addons-20210813032940-2022292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                21m                kubelet     Node addons-20210813032940-2022292 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug13 02:55] systemd-journald[174]: Failed to send stream file descriptor to service manager: Connection refused
	
	* 
	* ==> etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] <==
	* 2021-08-13 03:49:12.903868 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:49:22.904184 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:49:32.904258 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:49:42.903490 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:49:52.903824 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:50:02.903878 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:50:12.903699 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:50:22.903245 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:50:32.903787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:50:34.462552 I | mvcc: store.index: compact 2518
	2021-08-13 03:50:34.477363 I | mvcc: finished scheduled compaction at 2518 (took 14.242166ms)
	2021-08-13 03:50:42.903538 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:50:52.904010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:51:02.903991 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:51:12.903301 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:51:22.904061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:51:32.903401 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:51:42.903422 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:51:52.904243 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:52:02.903725 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:52:12.903284 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:52:22.903574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:52:32.903442 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:52:42.903627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:52:52.903833 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  03:53:02 up 13:35,  0 users,  load average: 0.48, 0.31, 0.96
	Linux addons-20210813032940-2022292 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] <==
	* I0813 03:48:32.684745       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:48:32.684767       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0813 03:48:54.115512       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0813 03:48:54.214807       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0813 03:48:54.218625       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	I0813 03:48:59.700255       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	I0813 03:49:09.310642       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:49:09.310680       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:49:09.310769       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:49:14.063536       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0813 03:49:51.817284       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:49:51.817346       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:49:51.817360       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:50:33.067452       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:50:33.067497       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:50:33.067505       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:51:13.320726       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:51:13.320763       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:51:13.320771       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:51:51.533253       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:51:51.533294       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:51:51.533302       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:52:34.702054       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:52:34.702088       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:52:34.702097       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] <==
	* E0813 03:48:57.735201       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:48:58.566122       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:00.507413       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:01.350263       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:03.154230       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:10.307301       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:10.399735       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:10.510769       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:27.742726       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:33.035106       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:35.652272       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:55.058751       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:50:17.128597       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:50:18.171684       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:50:27.540930       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:50:53.330387       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:51:09.991887       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:51:17.412505       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:51:30.997314       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:52:05.282532       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:52:07.032983       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:52:07.886273       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:52:40.151770       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:52:47.433138       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:53:00.458108       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] <==
	* I0813 03:30:58.916090       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 03:30:58.916147       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 03:30:58.916169       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 03:30:59.027458       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 03:30:59.027492       1 server_others.go:212] Using iptables Proxier.
	I0813 03:30:59.027502       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 03:30:59.027515       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 03:30:59.027867       1 server.go:643] Version: v1.21.3
	I0813 03:30:59.037124       1 config.go:315] Starting service config controller
	I0813 03:30:59.037136       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 03:30:59.037153       1 config.go:224] Starting endpoint slice config controller
	I0813 03:30:59.037156       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 03:30:59.040531       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 03:30:59.051028       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 03:30:59.141975       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 03:30:59.142026       1 shared_informer.go:247] Caches are synced for service config 
	W0813 03:36:56.043703       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 03:45:52.045929       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] <==
	* W0813 03:30:39.532366       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 03:30:39.532414       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 03:30:39.532440       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 03:30:39.532453       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 03:30:39.630521       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 03:30:39.630629       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 03:30:39.634320       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 03:30:39.634682       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 03:30:39.656905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 03:30:39.657167       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.657219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 03:30:39.657322       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 03:30:39.657373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.657425       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 03:30:39.657469       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.657522       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 03:30:39.657586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 03:30:39.657632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 03:30:39.657673       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 03:30:39.657783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.660525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 03:30:39.673014       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 03:30:40.516857       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 03:30:40.593078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0813 03:30:40.931616       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 03:29:47 UTC, end at Fri 2021-08-13 03:53:02 UTC. --
	Aug 13 03:52:30 addons-20210813032940-2022292 kubelet[1185]: I0813 03:52:30.842165    1185 scope.go:111] "RemoveContainer" containerID="dfbff09d21c8207a6be16d184937fac82ac6502f707c212889ae58cd3889bb4f"
	Aug 13 03:52:30 addons-20210813032940-2022292 kubelet[1185]: E0813 03:52:30.842504    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-xh6n8_olm(2a58a6fd-48ea-44a7-884d-f814b730c87a)\"" pod="olm/catalog-operator-75d496484d-xh6n8" podUID=2a58a6fd-48ea-44a7-884d-f814b730c87a
	Aug 13 03:52:32 addons-20210813032940-2022292 kubelet[1185]: I0813 03:52:32.841189    1185 scope.go:111] "RemoveContainer" containerID="9e12a8c2bdf148fa02bf682b582c4263d299d599d5c7760556fb4b568128ba59"
	Aug 13 03:52:32 addons-20210813032940-2022292 kubelet[1185]: E0813 03:52:32.841811    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=registry-proxy pod=registry-proxy-dg8n7_kube-system(3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031)\"" pod="kube-system/registry-proxy-dg8n7" podUID=3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031
	Aug 13 03:52:39 addons-20210813032940-2022292 kubelet[1185]: E0813 03:52:39.840906    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx" podUID=15d8912a-aaaa-4e7f-9212-a8819a810920
	Aug 13 03:52:41 addons-20210813032940-2022292 kubelet[1185]: I0813 03:52:41.840880    1185 scope.go:111] "RemoveContainer" containerID="dfbff09d21c8207a6be16d184937fac82ac6502f707c212889ae58cd3889bb4f"
	Aug 13 03:52:41 addons-20210813032940-2022292 kubelet[1185]: E0813 03:52:41.841246    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-xh6n8_olm(2a58a6fd-48ea-44a7-884d-f814b730c87a)\"" pod="olm/catalog-operator-75d496484d-xh6n8" podUID=2a58a6fd-48ea-44a7-884d-f814b730c87a
	Aug 13 03:52:42 addons-20210813032940-2022292 kubelet[1185]: I0813 03:52:42.840514    1185 scope.go:111] "RemoveContainer" containerID="5e5d9abdcdbf491571a80ec398090a2e9db048e0414e9d47e2678186d7788c72"
	Aug 13 03:52:42 addons-20210813032940-2022292 kubelet[1185]: E0813 03:52:42.840912    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-whcps_olm(9dfb17b5-db48-44a1-8daf-33ce6de73034)\"" pod="olm/olm-operator-859c88c96-whcps" podUID=9dfb17b5-db48-44a1-8daf-33ce6de73034
	Aug 13 03:52:47 addons-20210813032940-2022292 kubelet[1185]: I0813 03:52:47.841278    1185 scope.go:111] "RemoveContainer" containerID="9e12a8c2bdf148fa02bf682b582c4263d299d599d5c7760556fb4b568128ba59"
	Aug 13 03:52:47 addons-20210813032940-2022292 kubelet[1185]: E0813 03:52:47.841566    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=registry-proxy pod=registry-proxy-dg8n7_kube-system(3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031)\"" pod="kube-system/registry-proxy-dg8n7" podUID=3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031
	Aug 13 03:52:50 addons-20210813032940-2022292 kubelet[1185]: E0813 03:52:50.841710    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx" podUID=15d8912a-aaaa-4e7f-9212-a8819a810920
	Aug 13 03:52:55 addons-20210813032940-2022292 kubelet[1185]: I0813 03:52:55.840229    1185 scope.go:111] "RemoveContainer" containerID="5e5d9abdcdbf491571a80ec398090a2e9db048e0414e9d47e2678186d7788c72"
	Aug 13 03:52:55 addons-20210813032940-2022292 kubelet[1185]: E0813 03:52:55.840636    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-whcps_olm(9dfb17b5-db48-44a1-8daf-33ce6de73034)\"" pod="olm/olm-operator-859c88c96-whcps" podUID=9dfb17b5-db48-44a1-8daf-33ce6de73034
	Aug 13 03:52:56 addons-20210813032940-2022292 kubelet[1185]: I0813 03:52:56.840595    1185 scope.go:111] "RemoveContainer" containerID="dfbff09d21c8207a6be16d184937fac82ac6502f707c212889ae58cd3889bb4f"
	Aug 13 03:52:57 addons-20210813032940-2022292 kubelet[1185]: I0813 03:52:57.167065    1185 scope.go:111] "RemoveContainer" containerID="dfbff09d21c8207a6be16d184937fac82ac6502f707c212889ae58cd3889bb4f"
	Aug 13 03:52:57 addons-20210813032940-2022292 kubelet[1185]: I0813 03:52:57.167315    1185 scope.go:111] "RemoveContainer" containerID="463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad"
	Aug 13 03:52:57 addons-20210813032940-2022292 kubelet[1185]: E0813 03:52:57.167674    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-xh6n8_olm(2a58a6fd-48ea-44a7-884d-f814b730c87a)\"" pod="olm/catalog-operator-75d496484d-xh6n8" podUID=2a58a6fd-48ea-44a7-884d-f814b730c87a
	Aug 13 03:52:58 addons-20210813032940-2022292 kubelet[1185]: W0813 03:52:58.414166    1185 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/burstable/pod2a58a6fd-48ea-44a7-884d-f814b730c87a/463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad WatchSource:0}: task 463e949a986b547018a689cc770f8b37fb022c787272282558b29d3e999f5cad not found: not found
	Aug 13 03:52:59 addons-20210813032940-2022292 kubelet[1185]: I0813 03:52:59.840303    1185 scope.go:111] "RemoveContainer" containerID="9e12a8c2bdf148fa02bf682b582c4263d299d599d5c7760556fb4b568128ba59"
	Aug 13 03:53:00 addons-20210813032940-2022292 kubelet[1185]: I0813 03:53:00.176261    1185 scope.go:111] "RemoveContainer" containerID="9e12a8c2bdf148fa02bf682b582c4263d299d599d5c7760556fb4b568128ba59"
	Aug 13 03:53:00 addons-20210813032940-2022292 kubelet[1185]: I0813 03:53:00.176610    1185 scope.go:111] "RemoveContainer" containerID="8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1"
	Aug 13 03:53:00 addons-20210813032940-2022292 kubelet[1185]: E0813 03:53:00.176859    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=registry-proxy pod=registry-proxy-dg8n7_kube-system(3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031)\"" pod="kube-system/registry-proxy-dg8n7" podUID=3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031
	Aug 13 03:53:01 addons-20210813032940-2022292 kubelet[1185]: W0813 03:53:01.414415    1185 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods/besteffort/pod3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031/8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1 WatchSource:0}: task 8fb59a968f507d956797d3e3d192a2957b4c7f33700a830bd38252fb6b58b9b1 not found: not found
	Aug 13 03:53:01 addons-20210813032940-2022292 kubelet[1185]: E0813 03:53:01.842738    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx" podUID=15d8912a-aaaa-4e7f-9212-a8819a810920
	
	* 
	* ==> storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] <==
	* I0813 03:31:53.616300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 03:31:53.654387       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 03:31:53.654428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 03:31:53.684054       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 03:31:53.688430       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210813032940-2022292_487120fb-5274-456a-8b7e-33f90e734a44!
	I0813 03:31:53.696297       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7447bc9-2c2e-4fe9-978d-7328239a1c68", APIVersion:"v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210813032940-2022292_487120fb-5274-456a-8b7e-33f90e734a44 became leader
	I0813 03:31:53.792536       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210813032940-2022292_487120fb-5274-456a-8b7e-33f90e734a44!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210813032940-2022292 -n addons-20210813032940-2022292
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210813032940-2022292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: nginx ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210813032940-2022292 describe pod nginx ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210813032940-2022292 describe pod nginx ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx: exit status 1 (98.439837ms)

                                                
                                                
-- stdout --
	Name:         nginx
	Namespace:    default
	Priority:     0
	Node:         addons-20210813032940-2022292/192.168.49.2
	Start Time:   Fri, 13 Aug 2021 03:49:00 +0000
	Labels:       run=nginx
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n6rkg (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-n6rkg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m2s                  default-scheduler  Successfully assigned default/nginx to addons-20210813032940-2022292
	  Warning  Failed     3m48s                 kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m38s (x4 over 4m2s)  kubelet            Pulling image "nginx:alpine"
	  Warning  Failed     2m37s (x3 over 4m1s)  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m37s (x4 over 4m1s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m13s (x6 over 4m1s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m1s (x7 over 4m1s)   kubelet            Back-off pulling image "nginx:alpine"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r7rsv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2wdhx" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context addons-20210813032940-2022292 describe pod nginx ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx: exit status 1
--- FAIL: TestAddons/parallel/Ingress (243.73s)

                                                
                                    
TestAddons/parallel/Olm (732.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 24.63764ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:467: olm-operator stabilized in 26.562439ms

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:469: failed waiting for packageserver deployment to stabilize: timed out waiting for the condition
addons_test.go:471: packageserver stabilized in 6m0.027551964s
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
helpers_test.go:343: "catalog-operator-75d496484d-xh6n8" [2a58a6fd-48ea-44a7-884d-f814b730c87a] Running / Ready:ContainersNotReady (containers with unready status: [catalog-operator]) / ContainersReady:ContainersNotReady (containers with unready status: [catalog-operator])
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.005707817s
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
helpers_test.go:343: "olm-operator-859c88c96-whcps" [9dfb17b5-db48-44a1-8daf-33ce6de73034] Running / Ready:ContainersNotReady (containers with unready status: [olm-operator]) / ContainersReady:ContainersNotReady (containers with unready status: [olm-operator])
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.005338438s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:479: ***** TestAddons/parallel/Olm: pod "app=packageserver" failed to start within 6m0s: timed out waiting for the condition ****
addons_test.go:479: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210813032940-2022292 -n addons-20210813032940-2022292
addons_test.go:479: TestAddons/parallel/Olm: showing logs for failed pods as of 2021-08-13 03:49:11.817092372 +0000 UTC m=+1249.829151565
addons_test.go:480: failed waiting for pod packageserver: app=packageserver within 6m0s: timed out waiting for the condition
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Olm]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-20210813032940-2022292
helpers_test.go:236: (dbg) docker inspect addons-20210813032940-2022292:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212",
	        "Created": "2021-08-13T03:29:46.326395701Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2023217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T03:29:46.770105005Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/hostname",
	        "HostsPath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/hosts",
	        "LogPath": "/var/lib/docker/containers/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212/5eb115611cc3c203dd18e5bcf8bd911508a396ad77a4442055cb4b3d330b1212-json.log",
	        "Name": "/addons-20210813032940-2022292",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-20210813032940-2022292:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-20210813032940-2022292",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa317
1af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/d
ocker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41
f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a84c1c1faf655e022ea96b0cbfa5780f6b48eafe43cd340b60f563833149b80f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-20210813032940-2022292",
	                "Source": "/var/lib/docker/volumes/addons-20210813032940-2022292/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-20210813032940-2022292",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-20210813032940-2022292",
	                "name.minikube.sigs.k8s.io": "addons-20210813032940-2022292",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63cc91236c7d0216218ed6a99d16bf5a5214d1f2a29fe790b354ed1c3d95269a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50803"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50802"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50799"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50801"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50800"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/63cc91236c7d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-20210813032940-2022292": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5eb115611cc3",
	                        "addons-20210813032940-2022292"
	                    ],
	                    "NetworkID": "1437cc990d89cd4c2f4b86b77c1e915486671cda7aa7c792c2322229d169e87c",
	                    "EndpointID": "96d417aa7e8c5077c1e5d843cea177ffb9c204a83528a1fa41771ba12d8e11cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-20210813032940-2022292 -n addons-20210813032940-2022292
helpers_test.go:245: <<< TestAddons/parallel/Olm FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Olm]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210813032940-2022292 logs -n 25
helpers_test.go:253: TestAddons/parallel/Olm logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | --all                                  | download-only-20210813032822-2022292   | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:26 UTC | Fri, 13 Aug 2021 03:29:26 UTC |
	| delete  | -p                                     | download-only-20210813032822-2022292   | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:26 UTC | Fri, 13 Aug 2021 03:29:26 UTC |
	|         | download-only-20210813032822-2022292   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-only-20210813032822-2022292   | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:26 UTC | Fri, 13 Aug 2021 03:29:26 UTC |
	|         | download-only-20210813032822-2022292   |                                        |         |         |                               |                               |
	| delete  | -p                                     | download-docker-20210813032926-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:40 UTC | Fri, 13 Aug 2021 03:29:40 UTC |
	|         | download-docker-20210813032926-2022292 |                                        |         |         |                               |                               |
	| start   | -p                                     | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:29:40 UTC | Fri, 13 Aug 2021 03:37:01 UTC |
	|         | addons-20210813032940-2022292          |                                        |         |         |                               |                               |
	|         | --wait=true --memory=4000              |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --addons=registry                      |                                        |         |         |                               |                               |
	|         | --addons=metrics-server                |                                        |         |         |                               |                               |
	|         | --addons=olm                           |                                        |         |         |                               |                               |
	|         | --addons=volumesnapshots               |                                        |         |         |                               |                               |
	|         | --addons=csi-hostpath-driver           |                                        |         |         |                               |                               |
	|         | --driver=docker                        |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --addons=ingress                       |                                        |         |         |                               |                               |
	|         | --addons=gcp-auth                      |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:39:08 UTC | Fri, 13 Aug 2021 03:39:08 UTC |
	|         | ip                                     |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:41:44 UTC | Fri, 13 Aug 2021 03:41:44 UTC |
	|         | addons disable registry                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:41:45 UTC | Fri, 13 Aug 2021 03:41:46 UTC |
	|         | logs -n 25                             |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:41:57 UTC | Fri, 13 Aug 2021 03:42:24 UTC |
	|         | addons disable gcp-auth                |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:48:45 UTC | Fri, 13 Aug 2021 03:48:52 UTC |
	|         | addons disable                         |                                        |         |         |                               |                               |
	|         | csi-hostpath-driver                    |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:48:52 UTC | Fri, 13 Aug 2021 03:48:53 UTC |
	|         | addons disable volumesnapshots         |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	| -p      | addons-20210813032940-2022292          | addons-20210813032940-2022292          | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:48:58 UTC | Fri, 13 Aug 2021 03:48:59 UTC |
	|         | addons disable metrics-server          |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 03:29:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 03:29:40.904577 2022781 out.go:298] Setting OutFile to fd 1 ...
	I0813 03:29:40.904648 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:29:40.904652 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:29:40.904656 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:29:40.904776 2022781 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 03:29:40.905029 2022781 out.go:305] Setting JSON to false
	I0813 03:29:40.905896 2022781 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":47525,"bootTime":1628777856,"procs":373,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 03:29:40.905961 2022781 start.go:121] virtualization:  
	I0813 03:29:40.908162 2022781 out.go:177] * [addons-20210813032940-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0813 03:29:40.910717 2022781 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 03:29:40.909322 2022781 notify.go:169] Checking for updates...
	I0813 03:29:40.912282 2022781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 03:29:40.913989 2022781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 03:29:40.915709 2022781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 03:29:40.915862 2022781 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 03:29:40.950762 2022781 docker.go:132] docker version: linux-20.10.8
	I0813 03:29:40.950850 2022781 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:29:41.048943 2022781 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:29:40.991652348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:29:41.049041 2022781 docker.go:244] overlay module found
	I0813 03:29:41.051223 2022781 out.go:177] * Using the docker driver based on user configuration
	I0813 03:29:41.051242 2022781 start.go:278] selected driver: docker
	I0813 03:29:41.051247 2022781 start.go:751] validating driver "docker" against <nil>
	I0813 03:29:41.051260 2022781 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 03:29:41.051298 2022781 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 03:29:41.051322 2022781 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 03:29:41.053106 2022781 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 03:29:41.053411 2022781 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:29:41.128846 2022781 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:29:41.078382939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:29:41.128961 2022781 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 03:29:41.129117 2022781 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 03:29:41.129138 2022781 cni.go:93] Creating CNI manager for ""
	I0813 03:29:41.129145 2022781 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:29:41.129158 2022781 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 03:29:41.129163 2022781 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 03:29:41.129174 2022781 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 03:29:41.129183 2022781 start_flags.go:277] config:
	{Name:addons-20210813032940-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:29:41.131398 2022781 out.go:177] * Starting control plane node addons-20210813032940-2022292 in cluster addons-20210813032940-2022292
	I0813 03:29:41.131428 2022781 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 03:29:41.133269 2022781 out.go:177] * Pulling base image ...
	I0813 03:29:41.133290 2022781 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:29:41.133320 2022781 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0813 03:29:41.133338 2022781 cache.go:56] Caching tarball of preloaded images
	I0813 03:29:41.133463 2022781 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 03:29:41.133484 2022781 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 03:29:41.133759 2022781 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/config.json ...
	I0813 03:29:41.133785 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/config.json: {Name:mk0d1eb11345f673782e67cee6dd1983fc2ade38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:29:41.133935 2022781 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 03:29:41.165619 2022781 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 03:29:41.165641 2022781 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 03:29:41.165654 2022781 cache.go:205] Successfully downloaded all kic artifacts
	I0813 03:29:41.165678 2022781 start.go:313] acquiring machines lock for addons-20210813032940-2022292: {Name:mk4b9c97c204520a15a5934e9d971902370f4475 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 03:29:41.165798 2022781 start.go:317] acquired machines lock for "addons-20210813032940-2022292" in 99.224µs
	I0813 03:29:41.165826 2022781 start.go:89] Provisioning new machine with config: &{Name:addons-20210813032940-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 03:29:41.165896 2022781 start.go:126] createHost starting for "" (driver="docker")
	I0813 03:29:41.168439 2022781 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0813 03:29:41.168667 2022781 start.go:160] libmachine.API.Create for "addons-20210813032940-2022292" (driver="docker")
	I0813 03:29:41.168697 2022781 client.go:168] LocalClient.Create starting
	I0813 03:29:41.168779 2022781 main.go:130] libmachine: Creating CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 03:29:41.457503 2022781 main.go:130] libmachine: Creating client certificate: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 03:29:42.069244 2022781 cli_runner.go:115] Run: docker network inspect addons-20210813032940-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 03:29:42.096969 2022781 cli_runner.go:162] docker network inspect addons-20210813032940-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 03:29:42.097037 2022781 network_create.go:255] running [docker network inspect addons-20210813032940-2022292] to gather additional debugging logs...
	I0813 03:29:42.097062 2022781 cli_runner.go:115] Run: docker network inspect addons-20210813032940-2022292
	W0813 03:29:42.123327 2022781 cli_runner.go:162] docker network inspect addons-20210813032940-2022292 returned with exit code 1
	I0813 03:29:42.123351 2022781 network_create.go:258] error running [docker network inspect addons-20210813032940-2022292]: docker network inspect addons-20210813032940-2022292: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: addons-20210813032940-2022292
	I0813 03:29:42.123365 2022781 network_create.go:260] output of [docker network inspect addons-20210813032940-2022292]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: addons-20210813032940-2022292
	
	** /stderr **
	I0813 03:29:42.123423 2022781 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 03:29:42.150055 2022781 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x4000892220] misses:0}
	I0813 03:29:42.150105 2022781 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 03:29:42.150124 2022781 network_create.go:106] attempt to create docker network addons-20210813032940-2022292 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 03:29:42.150170 2022781 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true addons-20210813032940-2022292
	I0813 03:29:42.365897 2022781 network_create.go:90] docker network addons-20210813032940-2022292 192.168.49.0/24 created
	I0813 03:29:42.365924 2022781 kic.go:106] calculated static IP "192.168.49.2" for the "addons-20210813032940-2022292" container
	I0813 03:29:42.365989 2022781 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 03:29:42.392753 2022781 cli_runner.go:115] Run: docker volume create addons-20210813032940-2022292 --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --label created_by.minikube.sigs.k8s.io=true
	I0813 03:29:42.465525 2022781 oci.go:102] Successfully created a docker volume addons-20210813032940-2022292
	I0813 03:29:42.465589 2022781 cli_runner.go:115] Run: docker run --rm --name addons-20210813032940-2022292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --entrypoint /usr/bin/test -v addons-20210813032940-2022292:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 03:29:46.145957 2022781 cli_runner.go:168] Completed: docker run --rm --name addons-20210813032940-2022292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --entrypoint /usr/bin/test -v addons-20210813032940-2022292:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib: (3.680326113s)
	I0813 03:29:46.145978 2022781 oci.go:106] Successfully prepared a docker volume addons-20210813032940-2022292
	W0813 03:29:46.146006 2022781 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 03:29:46.146013 2022781 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 03:29:46.146068 2022781 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 03:29:46.146285 2022781 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:29:46.146304 2022781 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 03:29:46.146345 2022781 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210813032940-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 03:29:46.276252 2022781 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-20210813032940-2022292 --name addons-20210813032940-2022292 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-20210813032940-2022292 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-20210813032940-2022292 --network addons-20210813032940-2022292 --ip 192.168.49.2 --volume addons-20210813032940-2022292:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 03:29:46.777489 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Running}}
	I0813 03:29:46.831248 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:29:46.876305 2022781 cli_runner.go:115] Run: docker exec addons-20210813032940-2022292 stat /var/lib/dpkg/alternatives/iptables
	I0813 03:29:46.966276 2022781 oci.go:278] the created container "addons-20210813032940-2022292" has a running status.
	I0813 03:29:46.966302 2022781 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa...
	I0813 03:29:48.086545 2022781 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 03:30:00.277285 2022781 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-20210813032940-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (14.130901108s)
	I0813 03:30:00.277310 2022781 kic.go:188] duration metric: took 14.131004 seconds to extract preloaded images to volume
	I0813 03:30:00.350109 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:00.387273 2022781 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 03:30:00.387290 2022781 kic_runner.go:115] Args: [docker exec --privileged addons-20210813032940-2022292 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 03:30:00.480821 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:00.523697 2022781 machine.go:88] provisioning docker machine ...
	I0813 03:30:00.523726 2022781 ubuntu.go:169] provisioning hostname "addons-20210813032940-2022292"
	I0813 03:30:00.523781 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:00.558122 2022781 main.go:130] libmachine: Using SSH client type: native
	I0813 03:30:00.558295 2022781 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50803 <nil> <nil>}
	I0813 03:30:00.558308 2022781 main.go:130] libmachine: About to run SSH command:
	sudo hostname addons-20210813032940-2022292 && echo "addons-20210813032940-2022292" | sudo tee /etc/hostname
	I0813 03:30:00.689627 2022781 main.go:130] libmachine: SSH cmd err, output: <nil>: addons-20210813032940-2022292
	
	I0813 03:30:00.689693 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:00.721994 2022781 main.go:130] libmachine: Using SSH client type: native
	I0813 03:30:00.722165 2022781 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50803 <nil> <nil>}
	I0813 03:30:00.722192 2022781 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-20210813032940-2022292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-20210813032940-2022292/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-20210813032940-2022292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 03:30:00.836190 2022781 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 03:30:00.836215 2022781 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e6
89d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 03:30:00.836235 2022781 ubuntu.go:177] setting up certificates
	I0813 03:30:00.836244 2022781 provision.go:83] configureAuth start
	I0813 03:30:00.836296 2022781 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813032940-2022292
	I0813 03:30:00.866297 2022781 provision.go:137] copyHostCerts
	I0813 03:30:00.866361 2022781 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 03:30:00.866440 2022781 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 03:30:00.866493 2022781 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 03:30:00.866533 2022781 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.addons-20210813032940-2022292 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-20210813032940-2022292]
	I0813 03:30:01.389006 2022781 provision.go:171] copyRemoteCerts
	I0813 03:30:01.389079 2022781 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 03:30:01.389121 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.419000 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.502519 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 03:30:01.520038 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0813 03:30:01.534523 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 03:30:01.548773 2022781 provision.go:86] duration metric: configureAuth took 712.517206ms
	I0813 03:30:01.548788 2022781 ubuntu.go:193] setting minikube options for container-runtime
	I0813 03:30:01.548937 2022781 machine.go:91] provisioned docker machine in 1.0252225s
	I0813 03:30:01.548943 2022781 client.go:171] LocalClient.Create took 20.380236744s
	I0813 03:30:01.548963 2022781 start.go:168] duration metric: libmachine.API.Create for "addons-20210813032940-2022292" took 20.380294582s
	I0813 03:30:01.548971 2022781 start.go:267] post-start starting for "addons-20210813032940-2022292" (driver="docker")
	I0813 03:30:01.548975 2022781 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 03:30:01.549015 2022781 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 03:30:01.549053 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.580251 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.662251 2022781 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 03:30:01.664643 2022781 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 03:30:01.664666 2022781 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 03:30:01.664677 2022781 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 03:30:01.664684 2022781 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 03:30:01.664694 2022781 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 03:30:01.664745 2022781 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 03:30:01.664771 2022781 start.go:270] post-start completed in 115.793872ms
	I0813 03:30:01.665039 2022781 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813032940-2022292
	I0813 03:30:01.693800 2022781 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/config.json ...
	I0813 03:30:01.694005 2022781 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 03:30:01.694053 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.721552 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.801607 2022781 start.go:129] duration metric: createHost completed in 20.635699035s
	I0813 03:30:01.801629 2022781 start.go:80] releasing machines lock for "addons-20210813032940-2022292", held for 20.635816952s
	I0813 03:30:01.801697 2022781 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-20210813032940-2022292
	I0813 03:30:01.830486 2022781 ssh_runner.go:149] Run: systemctl --version
	I0813 03:30:01.830532 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.830558 2022781 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 03:30:01.830610 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:01.866554 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.870429 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:01.952247 2022781 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 03:30:02.115786 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 03:30:02.123997 2022781 docker.go:153] disabling docker service ...
	I0813 03:30:02.124041 2022781 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 03:30:02.145698 2022781 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 03:30:02.154128 2022781 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 03:30:02.230172 2022781 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 03:30:02.309742 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 03:30:02.317886 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 03:30:02.328545 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta
yIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 03:30:02.342323 2022781 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 03:30:02.348596 2022781 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 03:30:02.353961 2022781 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 03:30:02.428966 2022781 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 03:30:02.561206 2022781 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 03:30:02.561311 2022781 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 03:30:02.564783 2022781 start.go:417] Will wait 60s for crictl version
	I0813 03:30:02.564854 2022781 ssh_runner.go:149] Run: sudo crictl version
	I0813 03:30:02.628354 2022781 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T03:30:02Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 03:30:13.675200 2022781 ssh_runner.go:149] Run: sudo crictl version
	I0813 03:30:13.709613 2022781 start.go:426] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.6
	RuntimeApiVersion:  v1alpha2
	I0813 03:30:13.709705 2022781 ssh_runner.go:149] Run: containerd --version
	I0813 03:30:13.733653 2022781 ssh_runner.go:149] Run: containerd --version
	I0813 03:30:13.757722 2022781 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.6 ...
	I0813 03:30:13.757794 2022781 cli_runner.go:115] Run: docker network inspect addons-20210813032940-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 03:30:13.786942 2022781 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 03:30:13.789852 2022781 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 03:30:13.798329 2022781 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:30:13.798392 2022781 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 03:30:13.822356 2022781 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 03:30:13.822377 2022781 containerd.go:517] Images already preloaded, skipping extraction
	I0813 03:30:13.822420 2022781 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 03:30:13.844103 2022781 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 03:30:13.844124 2022781 cache_images.go:74] Images are preloaded, skipping loading
	I0813 03:30:13.844175 2022781 ssh_runner.go:149] Run: sudo crictl info
	I0813 03:30:13.867458 2022781 cni.go:93] Creating CNI manager for ""
	I0813 03:30:13.867480 2022781 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:30:13.867493 2022781 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 03:30:13.867528 2022781 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-20210813032940-2022292 NodeName:addons-20210813032940-2022292 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFil
e:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 03:30:13.867709 2022781 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "addons-20210813032940-2022292"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 03:30:13.867799 2022781 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-20210813032940-2022292 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0813 03:30:13.867861 2022781 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 03:30:13.874164 2022781 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 03:30:13.874220 2022781 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 03:30:13.880082 2022781 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (574 bytes)
	I0813 03:30:13.891242 2022781 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 03:30:13.902573 2022781 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0813 03:30:13.913737 2022781 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 03:30:13.916383 2022781 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 03:30:13.924200 2022781 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292 for IP: 192.168.49.2
	I0813 03:30:13.924238 2022781 certs.go:183] generating minikubeCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 03:30:14.303335 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt ...
	I0813 03:30:14.303366 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt: {Name:mk3901a19599d51a2d50c48585ff3f7192ba4433 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.303553 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key ...
	I0813 03:30:14.303570 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key: {Name:mk845cb200e03c80833445af29652075ca29c5ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.303661 2022781 certs.go:183] generating proxyClientCA CA: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 03:30:14.625439 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt ...
	I0813 03:30:14.625463 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt: {Name:mk50086ce36a18e239ef358ebe31b06ec58a54a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.625614 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key ...
	I0813 03:30:14.625629 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key: {Name:mkcd9f75f5685763d3008dae66cb562ca8ff349f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.625754 2022781 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.key
	I0813 03:30:14.625769 2022781 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt with IP's: []
	I0813 03:30:14.981494 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt ...
	I0813 03:30:14.981520 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: {Name:mk67389ffe06e3642f68dcb5d06f25c4a4286db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.981694 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.key ...
	I0813 03:30:14.981709 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.key: {Name:mk98a53e6092aad61eaf9907276fc969c6b86e98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:14.981803 2022781 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2
	I0813 03:30:14.981815 2022781 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 03:30:15.445439 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2 ...
	I0813 03:30:15.445467 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2: {Name:mk68008aff00f28fd78f3516c58a44d15f90967b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:15.445636 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2 ...
	I0813 03:30:15.445650 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2: {Name:mk75a5de72872e71c9f625f9410c2e8267bb030b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:15.445738 2022781 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt
	I0813 03:30:15.445794 2022781 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key
	I0813 03:30:15.445841 2022781 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key
	I0813 03:30:15.445852 2022781 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt with IP's: []
	I0813 03:30:16.134694 2022781 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt ...
	I0813 03:30:16.134726 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt: {Name:mkc9f3f094f59bf4cae95593974525020ed0791c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:16.134902 2022781 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key ...
	I0813 03:30:16.134917 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key: {Name:mk3f97104a527dd489a07fc16ea52fabc4e3c427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:16.135088 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 03:30:16.135130 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 03:30:16.135160 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 03:30:16.135186 2022781 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 03:30:16.137695 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 03:30:16.153617 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 03:30:16.168774 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 03:30:16.183608 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 03:30:16.198875 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 03:30:16.214461 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 03:30:16.229908 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 03:30:16.245519 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 03:30:16.261024 2022781 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 03:30:16.276150 2022781 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 03:30:16.287817 2022781 ssh_runner.go:149] Run: openssl version
	I0813 03:30:16.292417 2022781 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 03:30:16.298969 2022781 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:30:16.301737 2022781 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 13 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:30:16.301792 2022781 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:30:16.306354 2022781 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 03:30:16.312798 2022781 kubeadm.go:390] StartCluster: {Name:addons-20210813032940-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:addons-20210813032940-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:30:16.312962 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 03:30:16.313019 2022781 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 03:30:16.341509 2022781 cri.go:76] found id: ""
	I0813 03:30:16.341598 2022781 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 03:30:16.348137 2022781 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 03:30:16.354334 2022781 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 03:30:16.354389 2022781 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 03:30:16.360245 2022781 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 03:30:16.360292 2022781 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 03:30:16.991998 2022781 out.go:204]   - Generating certificates and keys ...
	I0813 03:30:22.528739 2022781 out.go:204]   - Booting up control plane ...
	I0813 03:30:42.096737 2022781 out.go:204]   - Configuring RBAC rules ...
	I0813 03:30:42.513534 2022781 cni.go:93] Creating CNI manager for ""
	I0813 03:30:42.513560 2022781 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:30:42.515615 2022781 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 03:30:42.515681 2022781 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 03:30:42.519188 2022781 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 03:30:42.519210 2022781 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 03:30:42.531743 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 03:30:43.275208 2022781 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 03:30:43.275325 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:43.275388 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=addons-20210813032940-2022292 minikube.k8s.io/updated_at=2021_08_13T03_30_43_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:43.426738 2022781 ops.go:34] apiserver oom_adj: -16
	I0813 03:30:43.426841 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:44.011413 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:44.511783 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:45.011599 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:45.510878 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:46.011296 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:46.511321 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:47.010930 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:47.510919 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:48.011476 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:48.511258 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:49.010873 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:49.511635 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:50.010907 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:50.511782 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:51.011260 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:51.511532 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:52.011061 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:52.510863 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:53.010893 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:53.511752 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:54.011653 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:54.511235 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:55.011781 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:55.511793 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:56.011692 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:56.511006 2022781 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 03:30:56.652188 2022781 kubeadm.go:985] duration metric: took 13.376902139s to wait for elevateKubeSystemPrivileges.
	I0813 03:30:56.652210 2022781 kubeadm.go:392] StartCluster complete in 40.339416945s
	I0813 03:30:56.652225 2022781 settings.go:142] acquiring lock: {Name:mke0b9bf6059169e73bfde24fe8e8162c3ec0654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:56.652354 2022781 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 03:30:56.652762 2022781 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk6797826f33680e9cda7cd38a7adfcabda9681c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:30:57.192592 2022781 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "addons-20210813032940-2022292" rescaled to 1
	I0813 03:30:57.192649 2022781 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 03:30:57.194497 2022781 out.go:177] * Verifying Kubernetes components...
	I0813 03:30:57.194581 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 03:30:57.192709 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 03:30:57.192937 2022781 addons.go:342] enableAddons start: toEnable=map[], additional=[registry metrics-server olm volumesnapshots csi-hostpath-driver ingress gcp-auth]
	I0813 03:30:57.194766 2022781 addons.go:59] Setting volumesnapshots=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.194794 2022781 addons.go:135] Setting addon volumesnapshots=true in "addons-20210813032940-2022292"
	I0813 03:30:57.194829 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.195355 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.195502 2022781 addons.go:59] Setting ingress=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.195519 2022781 addons.go:135] Setting addon ingress=true in "addons-20210813032940-2022292"
	I0813 03:30:57.195539 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.195954 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.196016 2022781 addons.go:59] Setting csi-hostpath-driver=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.196040 2022781 addons.go:135] Setting addon csi-hostpath-driver=true in "addons-20210813032940-2022292"
	I0813 03:30:57.196063 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.196472 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.196530 2022781 addons.go:59] Setting default-storageclass=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.196541 2022781 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-20210813032940-2022292"
	I0813 03:30:57.196744 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.196803 2022781 addons.go:59] Setting gcp-auth=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.196814 2022781 mustload.go:65] Loading cluster: addons-20210813032940-2022292
	I0813 03:30:57.197132 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.197184 2022781 addons.go:59] Setting olm=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.197194 2022781 addons.go:135] Setting addon olm=true in "addons-20210813032940-2022292"
	I0813 03:30:57.197212 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.200564 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.210302 2022781 addons.go:59] Setting metrics-server=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.210328 2022781 addons.go:135] Setting addon metrics-server=true in "addons-20210813032940-2022292"
	I0813 03:30:57.210364 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.210821 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.210931 2022781 addons.go:59] Setting registry=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.210942 2022781 addons.go:135] Setting addon registry=true in "addons-20210813032940-2022292"
	I0813 03:30:57.210969 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.211436 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.211498 2022781 addons.go:59] Setting storage-provisioner=true in profile "addons-20210813032940-2022292"
	I0813 03:30:57.211507 2022781 addons.go:135] Setting addon storage-provisioner=true in "addons-20210813032940-2022292"
	W0813 03:30:57.211512 2022781 addons.go:147] addon storage-provisioner should already be in state true
	I0813 03:30:57.211528 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.211909 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.359893 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0
	I0813 03:30:57.359970 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0813 03:30:57.359983 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0813 03:30:57.360041 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.397371 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 03:30:57.398431 2022781 node_ready.go:35] waiting up to 6m0s for node "addons-20210813032940-2022292" to be "Ready" ...
	I0813 03:30:57.457671 2022781 out.go:177]   - Using image quay.io/operator-framework/olm:v0.17.0
	I0813 03:30:57.464684 2022781 out.go:177]   - Using image quay.io/operator-framework/upstream-community-operators:07bbc13
	I0813 03:30:57.571352 2022781 out.go:177]   - Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
	I0813 03:30:57.571410 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0813 03:30:57.571426 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0813 03:30:57.571484 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.621898 2022781 out.go:177]   - Using image gcr.io/google_containers/kube-registry-proxy:0.4
	I0813 03:30:57.628181 2022781 out.go:177]   - Using image registry:2.7.1
	I0813 03:30:57.628300 2022781 addons.go:275] installing /etc/kubernetes/addons/registry-rc.yaml
	I0813 03:30:57.628309 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (788 bytes)
	I0813 03:30:57.628386 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.642700 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
	I0813 03:30:57.645886 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-controller:v0.2.0
	I0813 03:30:57.671912 2022781 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
	I0813 03:30:57.675421 2022781 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 03:30:57.677167 2022781 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0813 03:30:57.677235 2022781 addons.go:275] installing /etc/kubernetes/addons/ingress-configmap.yaml
	I0813 03:30:57.677244 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes)
	I0813 03:30:57.677305 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.724233 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-snapshotter:v4.0.0
	I0813 03:30:57.727436 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-external-health-monitor-agent:v0.2.0
	I0813 03:30:57.735580 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/hostpathplugin:v1.6.0
	I0813 03:30:57.742927 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
	I0813 03:30:57.739683 2022781 addons.go:135] Setting addon default-storageclass=true in "addons-20210813032940-2022292"
	I0813 03:30:57.739724 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:57.739755 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.724752 2022781 addons.go:275] installing /etc/kubernetes/addons/crds.yaml
	I0813 03:30:57.744017 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	W0813 03:30:57.745611 2022781 addons.go:147] addon default-storageclass should already be in state true
	I0813 03:30:57.745618 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0
	I0813 03:30:57.760402 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1
	I0813 03:30:57.756368 2022781 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 03:30:57.756381 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/crds.yaml (825331 bytes)
	I0813 03:30:57.756761 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:57.765577 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:57.765794 2022781 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 03:30:57.765818 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 03:30:57.765883 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.765965 2022781 out.go:177]   - Using image k8s.gcr.io/sig-storage/csi-resizer:v1.1.0
	I0813 03:30:57.766037 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0813 03:30:57.766058 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0813 03:30:57.766113 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.766218 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.818821 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:57.852135 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:57.889195 2022781 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0813 03:30:57.889281 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.970915 2022781 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 03:30:57.970933 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 03:30:57.970985 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:57.987520 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0813 03:30:57.987538 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1931 bytes)
	I0813 03:30:58.038677 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.056460 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.071053 2022781 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0813 03:30:58.071076 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0813 03:30:58.078413 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.124468 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.130893 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.143390 2022781 addons.go:275] installing /etc/kubernetes/addons/registry-svc.yaml
	I0813 03:30:58.143407 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0813 03:30:58.145018 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0813 03:30:58.145035 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0813 03:30:58.183598 2022781 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0813 03:30:58.183619 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0813 03:30:58.215426 2022781 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 03:30:58.215446 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0813 03:30:58.221069 2022781 addons.go:275] installing /etc/kubernetes/addons/ingress-rbac.yaml
	I0813 03:30:58.221112 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes)
	I0813 03:30:58.238980 2022781 addons.go:275] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0813 03:30:58.239028 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (950 bytes)
	I0813 03:30:58.243449 2022781 addons.go:275] installing /etc/kubernetes/addons/ingress-dp.yaml
	I0813 03:30:58.243489 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes)
	I0813 03:30:58.304607 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml
	I0813 03:30:58.310261 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0813 03:30:58.348483 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml
	I0813 03:30:58.348534 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml (2203 bytes)
	I0813 03:30:58.368606 2022781 addons.go:275] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0813 03:30:58.368664 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19584 bytes)
	I0813 03:30:58.373631 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0813 03:30:58.402890 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0813 03:30:58.402938 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3037 bytes)
	I0813 03:30:58.441222 2022781 addons.go:275] installing /etc/kubernetes/addons/olm.yaml
	I0813 03:30:58.441281 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/olm.yaml (9882 bytes)
	I0813 03:30:58.446337 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 03:30:58.505463 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0813 03:30:58.505525 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3428 bytes)
	I0813 03:30:58.556304 2022781 ssh_runner.go:316] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0813 03:30:58.577016 2022781 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.179580348s)
	I0813 03:30:58.577078 2022781 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0813 03:30:58.578408 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 03:30:58.608000 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0813 03:30:58.608059 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (3666 bytes)
	I0813 03:30:58.663310 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 03:30:58.753249 2022781 addons.go:275] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 03:30:58.753272 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1071 bytes)
	I0813 03:30:58.754118 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0813 03:30:58.754135 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2944 bytes)
	I0813 03:30:58.758582 2022781 addons.go:135] Setting addon gcp-auth=true in "addons-20210813032940-2022292"
	I0813 03:30:58.758626 2022781 host.go:66] Checking if "addons-20210813032940-2022292" exists ...
	I0813 03:30:58.759106 2022781 cli_runner.go:115] Run: docker container inspect addons-20210813032940-2022292 --format={{.State.Status}}
	I0813 03:30:58.821220 2022781 out.go:177]   - Using image jettech/kube-webhook-certgen:v1.3.0
	I0813 03:30:58.823073 2022781 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.6
	I0813 03:30:58.823123 2022781 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0813 03:30:58.823138 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0813 03:30:58.823192 2022781 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-20210813032940-2022292
	I0813 03:30:58.886665 2022781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50803 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/addons-20210813032940-2022292/id_rsa Username:docker}
	I0813 03:30:58.922146 2022781 addons.go:275] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0813 03:30:58.922166 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3194 bytes)
	I0813 03:30:58.946902 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 03:30:58.975186 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0813 03:30:58.975210 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2421 bytes)
	I0813 03:30:58.994077 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0813 03:30:58.994096 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1034 bytes)
	I0813 03:30:59.121853 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0813 03:30:59.121918 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (6710 bytes)
	I0813 03:30:59.311772 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-provisioner.yaml
	I0813 03:30:59.311791 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-provisioner.yaml (2555 bytes)
	I0813 03:30:59.407661 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:30:59.447040 2022781 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0813 03:30:59.447104 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (770 bytes)
	I0813 03:30:59.497092 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0813 03:30:59.497156 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2469 bytes)
	I0813 03:30:59.560580 2022781 addons.go:275] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0813 03:30:59.560641 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (4755 bytes)
	I0813 03:30:59.624771 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml
	I0813 03:30:59.624832 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml (2555 bytes)
	I0813 03:30:59.714273 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0813 03:30:59.850024 2022781 addons.go:275] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 03:30:59.850092 2022781 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0813 03:30:59.940480 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0813 03:31:01.011039 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.700714405s)
	I0813 03:31:01.011064 2022781 addons.go:313] Verifying addon registry=true in "addons-20210813032940-2022292"
	I0813 03:31:01.013288 2022781 out.go:177] * Verifying registry addon...
	I0813 03:31:01.014895 2022781 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0813 03:31:01.011401 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.637715134s)
	I0813 03:31:01.015041 2022781 addons.go:313] Verifying addon metrics-server=true in "addons-20210813032940-2022292"
	I0813 03:31:01.011418 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (2.70679067s)
	I0813 03:31:01.015053 2022781 addons.go:313] Verifying addon ingress=true in "addons-20210813032940-2022292"
	I0813 03:31:01.011558 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.565182254s)
	I0813 03:31:01.011669 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.433133243s)
	I0813 03:31:01.017502 2022781 out.go:177] * Verifying ingress addon...
	I0813 03:31:01.019575 2022781 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0813 03:31:01.054527 2022781 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 03:31:01.054568 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:01.075891 2022781 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0813 03:31:01.075946 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:01.441335 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:01.605451 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:01.606015 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:02.073277 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:02.097062 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:02.608493 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:02.684632 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:03.124082 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:03.130892 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:03.449076 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:03.559480 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.612543123s)
	W0813 03:31:03.559512 2022781 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0813 03:31:03.559537 2022781 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: unable to recognize "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	I0813 03:31:03.559557 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (4.896220344s)
	W0813 03:31:03.559572 2022781 addons.go:296] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 03:31:03.559579 2022781 retry.go:31] will retry after 291.140013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/catalogsources.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/clusterserviceversions.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/installplans.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operatorgroups.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/operators.operators.coreos.com created
	customresourcedefinition.apiextensions.k8s.io/subscriptions.operators.coreos.com created
	namespace/olm created
	namespace/operators created
	serviceaccount/olm-operator-serviceaccount created
	clusterrole.rbac.authorization.k8s.io/system:controller:operator-lifecycle-manager created
	clusterrolebinding.rbac.authorization.k8s.io/olm-operator-binding-olm created
	deployment.apps/olm-operator created
	deployment.apps/catalog-operator created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-edit created
	clusterrole.rbac.authorization.k8s.io/aggregate-olm-view created
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "OperatorGroup" in version "operators.coreos.com/v1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "ClusterServiceVersion" in version "operators.coreos.com/v1alpha1"
	unable to recognize "/etc/kubernetes/addons/olm.yaml": no matches for kind "CatalogSource" in version "operators.coreos.com/v1alpha1"
	I0813 03:31:03.559643 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.845306221s)
	I0813 03:31:03.559655 2022781 addons.go:313] Verifying addon gcp-auth=true in "addons-20210813032940-2022292"
	I0813 03:31:03.563647 2022781 out.go:177] * Verifying gcp-auth addon...
	I0813 03:31:03.565513 2022781 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0813 03:31:03.658489 2022781 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0813 03:31:03.658547 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:03.659098 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:03.679372 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:03.850862 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml
	I0813 03:31:03.920365 2022781 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0813 03:31:04.119619 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:04.140362 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:04.167248 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:04.576001 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:04.585046 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:04.682050 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:04.995932 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-agent.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-provisioner.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.05536482s)
	I0813 03:31:04.996001 2022781 addons.go:313] Verifying addon csi-hostpath-driver=true in "addons-20210813032940-2022292"
	I0813 03:31:04.999927 2022781 out.go:177] * Verifying csi-hostpath-driver addon...
	I0813 03:31:05.001725 2022781 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0813 03:31:05.051965 2022781 kapi.go:86] Found 5 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0813 03:31:05.052027 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:05.069278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:05.101185 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:05.260498 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:05.490392 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:05.589788 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:05.591256 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:05.595113 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:05.661721 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:06.068823 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:06.075971 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:06.080721 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:06.092778 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/crds.yaml -f /etc/kubernetes/addons/olm.yaml: (2.241844201s)
	I0813 03:31:06.092890 2022781 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.172455914s)
	I0813 03:31:06.166316 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:06.559342 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:06.560990 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:06.578867 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:06.661958 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:07.056869 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:07.059136 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:07.078935 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:07.162308 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:07.558175 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:07.558364 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:07.579127 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:07.661829 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:07.908271 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:08.057749 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:08.060739 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:08.088631 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:08.166573 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:08.557935 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:08.560943 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:08.586563 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:08.661083 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:09.071552 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:09.071943 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:09.078283 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:09.161747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:09.558910 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:09.560846 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:09.579424 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:09.661747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:10.058388 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:10.059019 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:10.079633 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:10.161876 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:10.407358 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:10.556948 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:10.558279 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:10.578788 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:10.661635 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:11.057295 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:11.059187 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:11.079035 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:11.161068 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:11.557584 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:11.558891 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:11.578602 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:11.661347 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:12.057052 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:12.058710 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:12.079397 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:12.161365 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:12.407663 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:12.557331 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:12.558836 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:12.579555 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:12.661269 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:13.058425 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:13.058819 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:13.079522 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:13.161225 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:13.557182 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:13.559122 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:13.578539 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:13.662249 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:14.056888 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:14.058979 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:14.079485 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:14.161653 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:14.557930 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:14.559013 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:14.578707 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:14.662203 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:14.908057 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:15.058555 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:15.058960 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:15.079677 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:15.161960 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:15.556797 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:15.558599 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:15.579189 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:15.665520 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:16.058280 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:16.059533 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:16.078991 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:16.161336 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:16.557900 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:16.559209 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:16.578911 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:16.662278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:16.908218 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:17.058976 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:17.062797 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:17.078598 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:17.161084 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:17.556309 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:17.558779 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:17.579316 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:17.661159 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:18.056254 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:18.058097 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:18.078601 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:18.161428 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:18.557023 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:18.558366 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:18.578650 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:18.660860 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:19.056829 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:19.057601 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:19.079058 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:19.160810 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:19.406830 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:19.557137 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:19.570593 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:19.578919 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:19.661198 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:20.056580 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:20.058011 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:20.078534 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:20.160728 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:20.556698 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:20.558044 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:20.578573 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:20.661305 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:21.055749 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:21.057234 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:21.078640 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:21.162419 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:21.407789 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:21.556491 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:21.558398 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:21.578920 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:21.661739 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:22.056835 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:22.059358 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:22.078758 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:22.161223 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:22.557289 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:22.558276 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:22.578866 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:22.661269 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:23.056957 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:23.058542 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:23.079179 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:23.160926 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:23.556244 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:23.565480 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:23.579030 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:23.661406 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:23.907689 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:24.056686 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:24.058284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:24.078803 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:24.161543 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:24.557645 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:24.559094 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:24.578505 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:24.661225 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:25.056915 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:25.058921 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:25.079371 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:25.161221 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:25.558548 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:25.560666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:25.578985 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:25.661314 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:25.908555 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:26.057813 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:26.060606 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:26.079261 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:26.162092 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:26.559363 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:26.563308 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:26.579390 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:26.662424 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:27.057172 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:27.058133 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:27.078618 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:27.160769 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:27.557054 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:27.558322 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:27.578892 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:27.661134 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:28.056649 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:28.058426 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:28.078843 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:28.161113 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:28.407578 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:28.556884 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:28.558849 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:28.579213 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:28.661872 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:29.062391 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:29.064747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:29.079349 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:29.161744 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:29.559485 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:29.559615 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:29.588437 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:29.661647 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:30.176028 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:30.178278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:30.178550 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:30.179384 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:30.557737 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:30.559734 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:30.579399 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:30.661487 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:30.908526 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:31.058560 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:31.058776 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:31.079841 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:31.162063 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:31.557377 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:31.559871 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:31.579905 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:31.662325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:32.058012 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:32.059445 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:32.079203 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:32.162061 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:32.556579 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:32.558342 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:32.579193 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:32.661445 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:33.056971 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:33.062490 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:33.079405 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:33.161773 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:33.407515 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:33.557161 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:33.559301 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:33.578865 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:33.662094 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:34.058311 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:34.063650 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:34.079156 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:34.161208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:34.556999 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:34.559078 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:34.578785 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:34.661861 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:35.057159 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:35.058229 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:35.078820 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:35.161890 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:35.407603 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:35.556952 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:35.559040 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:35.579535 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:35.661201 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:36.057747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:36.059208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:36.078661 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:36.161392 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:36.558465 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:36.559070 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:36.578566 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:36.661888 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:37.057392 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:37.060584 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:37.079304 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:37.161650 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:37.564374 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:37.565926 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:37.579818 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:37.661622 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:37.907609 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:38.058278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:38.058920 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:38.079175 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:38.161440 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:38.557591 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:38.559313 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:38.579040 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:38.661840 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:39.056666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:39.058149 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:39.078850 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:39.161258 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:39.556921 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:39.558888 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:39.579613 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:39.661796 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:40.057861 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:40.059831 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:40.079270 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:40.161703 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:40.407456 2022781 node_ready.go:58] node "addons-20210813032940-2022292" has status "Ready":"False"
	I0813 03:31:40.556897 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:40.558705 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:40.579932 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:40.661833 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:41.056472 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:41.059744 2022781 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0813 03:31:41.059763 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:41.079609 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:41.161642 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:41.407679 2022781 node_ready.go:49] node "addons-20210813032940-2022292" has status "Ready":"True"
	I0813 03:31:41.407707 2022781 node_ready.go:38] duration metric: took 44.009250418s waiting for node "addons-20210813032940-2022292" to be "Ready" ...
	I0813 03:31:41.407716 2022781 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 03:31:41.415156 2022781 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:41.558078 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:41.560565 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:41.579199 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:41.661913 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:42.056679 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:42.059509 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:42.079025 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:42.161532 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:42.556679 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:42.559194 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:42.579531 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:42.660981 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:43.057202 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:43.059371 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:43.079006 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:43.161759 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:43.434237 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:43.558666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:43.558977 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:43.579098 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:43.662316 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:44.056791 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:44.058613 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:44.079594 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:44.161076 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:44.556840 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:44.559275 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:44.578741 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:44.661566 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:45.057098 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:45.058829 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:45.079389 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:45.161883 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:45.436262 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:45.575150 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:45.584294 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:45.585026 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:45.661603 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:46.057182 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:46.059769 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:46.079195 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:46.162319 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:46.556459 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:46.559487 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:46.580902 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:46.661798 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:47.057717 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:47.059285 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:47.079021 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:47.161669 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:47.438260 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:47.559550 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:47.559922 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:47.579440 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:47.661740 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:48.057821 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:48.061063 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:48.079047 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:48.161877 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:48.559849 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:48.560623 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:48.579150 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:48.662015 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:49.056892 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:49.058463 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:49.078937 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:49.163554 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:49.557752 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:49.564778 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:49.579620 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:49.661737 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:49.933528 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:30:56 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:50.061568 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:50.062915 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:50.078976 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:50.161696 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:50.557144 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:50.559099 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:50.578749 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:50.661769 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:51.058325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:51.060796 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:51.082729 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:51.165438 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:51.558033 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:51.560059 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:51.578642 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:51.661962 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:51.937811 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:31:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:52.058037 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:52.065290 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:52.082306 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:52.162234 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:52.561858 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:52.563177 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:52.579176 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:52.662893 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:53.058836 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:53.059610 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:53.079567 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:53.161708 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:53.556763 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:53.560389 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:53.580992 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:53.668937 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:54.066272 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:54.066848 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:54.083319 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:54.162629 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:54.436037 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:31:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:54.556924 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:54.559716 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:54.579214 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:54.662311 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:55.059249 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:55.061042 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:55.078970 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:55.161897 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:55.557313 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:55.559641 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:55.579729 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:55.661895 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:56.059000 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:56.062237 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:56.079500 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:56.161665 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:56.436902 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-08-13 03:31:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0813 03:31:56.566261 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:56.566676 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:56.578767 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:56.661644 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:57.057090 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:57.059556 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:57.079502 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:57.161359 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:57.557584 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:57.559448 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:57.579110 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:57.661572 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:58.057085 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:58.058884 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:58.079179 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:58.161220 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:58.538956 2022781 pod_ready.go:102] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace has status "Ready":"False"
	I0813 03:31:58.557350 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:58.559691 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:58.579943 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:58.791628 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:59.059276 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:59.061564 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:59.079752 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:59.161666 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:59.435689 2022781 pod_ready.go:92] pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.435753 2022781 pod_ready.go:81] duration metric: took 18.020568142s waiting for pod "coredns-558bd4d5db-69x4l" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.435791 2022781 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.441594 2022781 pod_ready.go:92] pod "etcd-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.441612 2022781 pod_ready.go:81] duration metric: took 5.786951ms waiting for pod "etcd-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.441623 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.445277 2022781 pod_ready.go:92] pod "kube-apiserver-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.445293 2022781 pod_ready.go:81] duration metric: took 3.644132ms waiting for pod "kube-apiserver-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.445302 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.449072 2022781 pod_ready.go:92] pod "kube-controller-manager-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.449092 2022781 pod_ready.go:81] duration metric: took 3.769744ms waiting for pod "kube-controller-manager-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.449101 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9knsw" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.452876 2022781 pod_ready.go:92] pod "kube-proxy-9knsw" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.452894 2022781 pod_ready.go:81] duration metric: took 3.786088ms waiting for pod "kube-proxy-9knsw" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.452902 2022781 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.557933 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:31:59.559887 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:31:59.579750 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:31:59.661390 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:31:59.833698 2022781 pod_ready.go:92] pod "kube-scheduler-addons-20210813032940-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:31:59.833721 2022781 pod_ready.go:81] duration metric: took 380.809632ms waiting for pod "kube-scheduler-addons-20210813032940-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:31:59.833732 2022781 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace to be "Ready" ...
	I0813 03:32:00.057507 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:00.060450 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:00.094038 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:00.162156 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:00.559531 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:00.560807 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:00.581504 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:00.662211 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:01.059861 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:01.066023 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:01.079504 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:01.161767 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:01.559334 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:01.561759 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:01.579832 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:01.662353 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:02.056935 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:02.059383 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:02.079185 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:02.161090 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:02.240408 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:02.561477 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:02.562142 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:02.579306 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:02.662331 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:03.059012 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:03.059508 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:03.079655 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:03.162089 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:03.559560 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:03.561402 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:03.579678 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:03.661723 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:04.059263 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:04.059783 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:04.078943 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:04.162284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:04.241030 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:04.557153 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:04.560141 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:04.579728 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:04.662513 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:05.057080 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:05.059685 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:05.079032 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:05.162737 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:05.557144 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:05.559484 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:05.579579 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:05.661242 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:06.057042 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:06.059442 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:06.079396 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:06.162304 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:06.557204 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:06.559494 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:06.579157 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:06.661762 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:06.740372 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:07.057455 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:07.059940 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:07.079231 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:07.161987 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:07.559284 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:07.559814 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:07.579364 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:07.662813 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:08.073362 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:08.079626 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:08.084214 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:08.163307 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:08.559617 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:08.560235 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:08.579114 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:08.662066 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:08.741784 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:09.056830 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:09.059188 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:09.088691 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:09.162660 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:09.562253 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:09.563021 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:09.579227 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:09.661934 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:10.059459 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:10.063371 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:10.079184 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:10.175815 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:10.593011 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:10.593586 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:10.594885 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:10.661226 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:11.058162 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:11.058643 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:11.079722 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:11.161708 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:11.240182 2022781 pod_ready.go:102] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"False"
	I0813 03:32:11.558205 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:11.560235 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:11.586837 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:11.661385 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:11.739491 2022781 pod_ready.go:92] pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace has status "Ready":"True"
	I0813 03:32:11.739550 2022781 pod_ready.go:81] duration metric: took 11.905794872s waiting for pod "metrics-server-77c99ccb96-vn6tn" in "kube-system" namespace to be "Ready" ...
	I0813 03:32:11.739581 2022781 pod_ready.go:38] duration metric: took 30.331840863s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 03:32:11.739626 2022781 api_server.go:50] waiting for apiserver process to appear ...
	I0813 03:32:11.739656 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 03:32:11.739748 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 03:32:11.812172 2022781 cri.go:76] found id: "e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:11.812187 2022781 cri.go:76] found id: ""
	I0813 03:32:11.812193 2022781 logs.go:270] 1 containers: [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3]
	I0813 03:32:11.812263 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.814780 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 03:32:11.814825 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 03:32:11.843019 2022781 cri.go:76] found id: "3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:11.843063 2022781 cri.go:76] found id: ""
	I0813 03:32:11.843075 2022781 logs.go:270] 1 containers: [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11]
	I0813 03:32:11.843111 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.845551 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 03:32:11.845596 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 03:32:11.867609 2022781 cri.go:76] found id: "76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:11.867624 2022781 cri.go:76] found id: ""
	I0813 03:32:11.867630 2022781 logs.go:270] 1 containers: [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd]
	I0813 03:32:11.867685 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.870294 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 03:32:11.870336 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 03:32:11.894045 2022781 cri.go:76] found id: "ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:11.894086 2022781 cri.go:76] found id: ""
	I0813 03:32:11.894097 2022781 logs.go:270] 1 containers: [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b]
	I0813 03:32:11.894130 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.896655 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 03:32:11.896697 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 03:32:11.925307 2022781 cri.go:76] found id: "b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:11.925323 2022781 cri.go:76] found id: ""
	I0813 03:32:11.925330 2022781 logs.go:270] 1 containers: [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea]
	I0813 03:32:11.925365 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.927899 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 03:32:11.927939 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 03:32:11.949495 2022781 cri.go:76] found id: ""
	I0813 03:32:11.949534 2022781 logs.go:270] 0 containers: []
	W0813 03:32:11.949546 2022781 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 03:32:11.949552 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 03:32:11.949587 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 03:32:11.971549 2022781 cri.go:76] found id: "f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:11.971566 2022781 cri.go:76] found id: ""
	I0813 03:32:11.971571 2022781 logs.go:270] 1 containers: [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca]
	I0813 03:32:11.971620 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:11.974054 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 03:32:11.974095 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 03:32:12.003331 2022781 cri.go:76] found id: "fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:12.003378 2022781 cri.go:76] found id: ""
	I0813 03:32:12.003394 2022781 logs.go:270] 1 containers: [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377]
	I0813 03:32:12.003459 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:12.006140 2022781 logs.go:123] Gathering logs for kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] ...
	I0813 03:32:12.006155 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:12.033019 2022781 logs.go:123] Gathering logs for containerd ...
	I0813 03:32:12.033040 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 03:32:12.068283 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:12.068688 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:12.079690 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:12.118724 2022781 logs.go:123] Gathering logs for kubelet ...
	I0813 03:32:12.118744 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0813 03:32:12.161825 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0813 03:32:12.177299 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:12.177544 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:12.214534 2022781 logs.go:123] Gathering logs for dmesg ...
	I0813 03:32:12.214555 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 03:32:12.230395 2022781 logs.go:123] Gathering logs for etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] ...
	I0813 03:32:12.230412 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:12.255618 2022781 logs.go:123] Gathering logs for coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] ...
	I0813 03:32:12.255638 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:12.276571 2022781 logs.go:123] Gathering logs for kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] ...
	I0813 03:32:12.276592 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:12.300947 2022781 logs.go:123] Gathering logs for storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] ...
	I0813 03:32:12.300966 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:12.323282 2022781 logs.go:123] Gathering logs for kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] ...
	I0813 03:32:12.323301 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:12.375068 2022781 logs.go:123] Gathering logs for container status ...
	I0813 03:32:12.375093 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 03:32:12.401272 2022781 logs.go:123] Gathering logs for describe nodes ...
	I0813 03:32:12.401291 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 03:32:12.548904 2022781 logs.go:123] Gathering logs for kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] ...
	I0813 03:32:12.548930 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:12.559375 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:12.562030 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:12.579464 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:12.659062 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:12.659085 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0813 03:32:12.659194 2022781 out.go:242] X Problems detected in kubelet:
	W0813 03:32:12.659204 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:12.659211 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:12.659217 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:12.659225 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:32:12.673862 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:13.059325 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:13.062776 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:13.087927 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:13.162237 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:13.560080 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:13.565340 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:13.579524 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:13.661442 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:14.060788 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:14.062591 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:14.079441 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:14.161105 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:14.556863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:14.558839 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:14.579411 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:14.661226 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:15.058603 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:15.060542 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:15.080085 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:15.162195 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:15.558043 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:15.559656 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:15.580190 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:15.665749 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:16.057055 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:16.059113 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:16.079769 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:16.161497 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0813 03:32:16.559200 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:16.560087 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:16.579835 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:16.662097 2022781 kapi.go:108] duration metric: took 1m13.096582398s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0813 03:32:16.664068 2022781 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-20210813032940-2022292 cluster.
	I0813 03:32:16.670830 2022781 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0813 03:32:16.676788 2022781 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0813 03:32:17.057899 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:17.059706 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:17.079747 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:17.558152 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:17.560601 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:17.579004 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:18.061308 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:18.062057 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:18.079880 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:18.558150 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:18.560757 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:18.579782 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:19.056963 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:19.059210 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:19.078839 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:19.557242 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:19.559678 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:19.579861 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:20.056923 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:20.060320 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:20.078955 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:20.560350 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:20.561208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:20.579305 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:21.067467 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:21.067845 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:21.080703 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:21.556957 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:21.564057 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:21.579253 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:22.062724 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:22.065424 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:22.080480 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:22.558437 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:22.563395 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:22.580348 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:22.659953 2022781 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:32:22.694110 2022781 api_server.go:70] duration metric: took 1m25.501422314s to wait for apiserver process to appear ...
	I0813 03:32:22.694173 2022781 api_server.go:86] waiting for apiserver healthz status ...
	I0813 03:32:22.694205 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 03:32:22.694282 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 03:32:22.732785 2022781 cri.go:76] found id: "e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:22.732842 2022781 cri.go:76] found id: ""
	I0813 03:32:22.732861 2022781 logs.go:270] 1 containers: [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3]
	I0813 03:32:22.732936 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.736230 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 03:32:22.736312 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 03:32:22.765159 2022781 cri.go:76] found id: "3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:22.765209 2022781 cri.go:76] found id: ""
	I0813 03:32:22.765228 2022781 logs.go:270] 1 containers: [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11]
	I0813 03:32:22.765308 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.768400 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 03:32:22.768493 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 03:32:22.825296 2022781 cri.go:76] found id: "76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:22.825357 2022781 cri.go:76] found id: ""
	I0813 03:32:22.825375 2022781 logs.go:270] 1 containers: [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd]
	I0813 03:32:22.825450 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.829195 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 03:32:22.829285 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 03:32:22.881181 2022781 cri.go:76] found id: "ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:22.881242 2022781 cri.go:76] found id: ""
	I0813 03:32:22.881259 2022781 logs.go:270] 1 containers: [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b]
	I0813 03:32:22.881329 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.885820 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 03:32:22.885908 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 03:32:22.982661 2022781 cri.go:76] found id: "b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:22.982714 2022781 cri.go:76] found id: ""
	I0813 03:32:22.982741 2022781 logs.go:270] 1 containers: [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea]
	I0813 03:32:22.982813 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:22.986328 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 03:32:22.986378 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 03:32:23.027000 2022781 cri.go:76] found id: ""
	I0813 03:32:23.027018 2022781 logs.go:270] 0 containers: []
	W0813 03:32:23.027025 2022781 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 03:32:23.027032 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 03:32:23.027083 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 03:32:23.060139 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:23.061278 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:23.069559 2022781 cri.go:76] found id: "f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:23.069608 2022781 cri.go:76] found id: ""
	I0813 03:32:23.069627 2022781 logs.go:270] 1 containers: [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca]
	I0813 03:32:23.069694 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:23.074554 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 03:32:23.074645 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 03:32:23.080185 2022781 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0813 03:32:23.110198 2022781 cri.go:76] found id: "fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:23.110256 2022781 cri.go:76] found id: ""
	I0813 03:32:23.110274 2022781 logs.go:270] 1 containers: [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377]
	I0813 03:32:23.110353 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:23.114338 2022781 logs.go:123] Gathering logs for containerd ...
	I0813 03:32:23.114442 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 03:32:23.226451 2022781 logs.go:123] Gathering logs for dmesg ...
	I0813 03:32:23.226481 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 03:32:23.241683 2022781 logs.go:123] Gathering logs for describe nodes ...
	I0813 03:32:23.241711 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 03:32:23.499313 2022781 logs.go:123] Gathering logs for kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] ...
	I0813 03:32:23.499341 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:23.556000 2022781 logs.go:123] Gathering logs for kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] ...
	I0813 03:32:23.556029 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:23.561069 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:23.563716 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:23.580520 2022781 kapi.go:108] duration metric: took 1m22.560942925s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0813 03:32:23.620196 2022781 logs.go:123] Gathering logs for kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] ...
	I0813 03:32:23.620226 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:23.649357 2022781 logs.go:123] Gathering logs for storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] ...
	I0813 03:32:23.649384 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:23.680023 2022781 logs.go:123] Gathering logs for container status ...
	I0813 03:32:23.680049 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 03:32:23.722823 2022781 logs.go:123] Gathering logs for kubelet ...
	I0813 03:32:23.722853 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0813 03:32:23.784859 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:23.785152 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:23.823338 2022781 logs.go:123] Gathering logs for etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] ...
	I0813 03:32:23.823392 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:23.857325 2022781 logs.go:123] Gathering logs for coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] ...
	I0813 03:32:23.857354 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:23.888625 2022781 logs.go:123] Gathering logs for kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] ...
	I0813 03:32:23.888649 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:23.943578 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:23.943633 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0813 03:32:23.943825 2022781 out.go:242] X Problems detected in kubelet:
	W0813 03:32:23.943838 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:23.943847 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:23.943858 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:23.943863 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:32:24.063676 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:24.065290 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:24.557303 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:24.559593 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:25.058802 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:25.059726 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:25.557197 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:25.559980 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:26.057045 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:26.059553 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:26.556742 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:26.559573 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:27.058797 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0813 03:32:27.065273 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:27.557423 2022781 kapi.go:108] duration metric: took 1m22.555695527s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0813 03:32:27.567926 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:28.058729 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:28.558602 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:29.058767 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:29.558686 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:30.059333 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:30.558577 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:31.057998 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:31.558541 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:32.058329 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:32.558334 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:33.058677 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:33.559382 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:33.945317 2022781 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 03:32:33.954121 2022781 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 03:32:33.955027 2022781 api_server.go:139] control plane version: v1.21.3
	I0813 03:32:33.955048 2022781 api_server.go:129] duration metric: took 11.260858091s to wait for apiserver health ...
	I0813 03:32:33.955057 2022781 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 03:32:33.955075 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0813 03:32:33.955134 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0813 03:32:33.989217 2022781 cri.go:76] found id: "e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:33.989241 2022781 cri.go:76] found id: ""
	I0813 03:32:33.989246 2022781 logs.go:270] 1 containers: [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3]
	I0813 03:32:33.989289 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:33.991827 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0813 03:32:33.991873 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0813 03:32:34.015261 2022781 cri.go:76] found id: "3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:34.015279 2022781 cri.go:76] found id: ""
	I0813 03:32:34.015285 2022781 logs.go:270] 1 containers: [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11]
	I0813 03:32:34.015324 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.017874 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0813 03:32:34.017921 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0813 03:32:34.040647 2022781 cri.go:76] found id: "76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:34.040664 2022781 cri.go:76] found id: ""
	I0813 03:32:34.040669 2022781 logs.go:270] 1 containers: [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd]
	I0813 03:32:34.040711 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.043319 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0813 03:32:34.043370 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0813 03:32:34.059259 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:34.069010 2022781 cri.go:76] found id: "ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:34.069034 2022781 cri.go:76] found id: ""
	I0813 03:32:34.069040 2022781 logs.go:270] 1 containers: [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b]
	I0813 03:32:34.069080 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.071835 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0813 03:32:34.071887 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0813 03:32:34.095116 2022781 cri.go:76] found id: "b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:34.095139 2022781 cri.go:76] found id: ""
	I0813 03:32:34.095145 2022781 logs.go:270] 1 containers: [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea]
	I0813 03:32:34.095190 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.097821 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0813 03:32:34.097868 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0813 03:32:34.119306 2022781 cri.go:76] found id: ""
	I0813 03:32:34.119322 2022781 logs.go:270] 0 containers: []
	W0813 03:32:34.119328 2022781 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0813 03:32:34.119334 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0813 03:32:34.119379 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0813 03:32:34.142251 2022781 cri.go:76] found id: "f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:34.142273 2022781 cri.go:76] found id: ""
	I0813 03:32:34.142279 2022781 logs.go:270] 1 containers: [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca]
	I0813 03:32:34.142334 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.144992 2022781 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0813 03:32:34.145041 2022781 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0813 03:32:34.167211 2022781 cri.go:76] found id: "fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:34.167227 2022781 cri.go:76] found id: ""
	I0813 03:32:34.167232 2022781 logs.go:270] 1 containers: [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377]
	I0813 03:32:34.167272 2022781 ssh_runner.go:149] Run: which crictl
	I0813 03:32:34.169792 2022781 logs.go:123] Gathering logs for kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] ...
	I0813 03:32:34.169810 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377"
	I0813 03:32:34.216311 2022781 logs.go:123] Gathering logs for containerd ...
	I0813 03:32:34.216383 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0813 03:32:34.298041 2022781 logs.go:123] Gathering logs for dmesg ...
	I0813 03:32:34.298069 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0813 03:32:34.310283 2022781 logs.go:123] Gathering logs for describe nodes ...
	I0813 03:32:34.310309 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0813 03:32:34.442777 2022781 logs.go:123] Gathering logs for kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] ...
	I0813 03:32:34.442803 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3"
	I0813 03:32:34.491851 2022781 logs.go:123] Gathering logs for etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] ...
	I0813 03:32:34.491910 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11"
	I0813 03:32:34.521318 2022781 logs.go:123] Gathering logs for kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] ...
	I0813 03:32:34.521347 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea"
	I0813 03:32:34.544930 2022781 logs.go:123] Gathering logs for storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] ...
	I0813 03:32:34.544954 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca"
	I0813 03:32:34.558879 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:34.568978 2022781 logs.go:123] Gathering logs for container status ...
	I0813 03:32:34.569002 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0813 03:32:34.595398 2022781 logs.go:123] Gathering logs for kubelet ...
	I0813 03:32:34.595422 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0813 03:32:34.648293 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:34.648542 2022781 logs.go:138] Found kubelet problem: Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:34.694139 2022781 logs.go:123] Gathering logs for coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] ...
	I0813 03:32:34.694164 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd"
	I0813 03:32:34.716899 2022781 logs.go:123] Gathering logs for kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] ...
	I0813 03:32:34.716924 2022781 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b"
	I0813 03:32:34.743237 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:34.743258 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	W0813 03:32:34.743376 2022781 out.go:242] X Problems detected in kubelet:
	W0813 03:32:34.743389 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615910    1185 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	W0813 03:32:34.743396 2022781 out.go:242]   Aug 13 03:30:56 addons-20210813032940-2022292 kubelet[1185]: E0813 03:30:56.615970    1185 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-20210813032940-2022292" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-20210813032940-2022292' and this object
	I0813 03:32:34.743409 2022781 out.go:311] Setting ErrFile to fd 2...
	I0813 03:32:34.743414 2022781 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:32:35.059544 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:35.558823 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:36.058742 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:36.558321 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:37.059068 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:37.559539 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:38.059128 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:38.559174 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:39.058759 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:39.558560 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:40.059643 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:40.558958 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:41.058990 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:41.559162 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:42.057900 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:42.558835 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:43.059061 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:43.559393 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:44.058040 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:44.558542 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:44.758638 2022781 system_pods.go:59] 18 kube-system pods found
	I0813 03:32:44.758676 2022781 system_pods.go:61] "coredns-558bd4d5db-69x4l" [ef73518e-08da-4a27-a504-85f6e14fde4e] Running
	I0813 03:32:44.758682 2022781 system_pods.go:61] "csi-hostpath-attacher-0" [5b8c9e1d-36af-484a-8f71-8cbdc93e1848] Running
	I0813 03:32:44.758686 2022781 system_pods.go:61] "csi-hostpath-provisioner-0" [d285e137-046a-4c9f-8a5c-b513a07b4ac1] Running
	I0813 03:32:44.758691 2022781 system_pods.go:61] "csi-hostpath-resizer-0" [d005f1b6-ee22-4ef7-aabe-9ad79b904d8e] Running
	I0813 03:32:44.758696 2022781 system_pods.go:61] "csi-hostpath-snapshotter-0" [203926d6-73de-49f5-8477-3d1cf26d233e] Running
	I0813 03:32:44.758701 2022781 system_pods.go:61] "csi-hostpathplugin-0" [e5a40ad3-af6f-4aca-9d33-5d9620d28d85] Running
	I0813 03:32:44.758707 2022781 system_pods.go:61] "etcd-addons-20210813032940-2022292" [5e80a189-29fa-44b4-b290-7896746c4542] Running
	I0813 03:32:44.758712 2022781 system_pods.go:61] "kindnet-6qhgq" [41b60387-4d90-4496-a617-d04aaf6d654a] Running
	I0813 03:32:44.758717 2022781 system_pods.go:61] "kube-apiserver-addons-20210813032940-2022292" [e344bbc9-9190-49fe-915e-c8460a1fbe6e] Running
	I0813 03:32:44.758727 2022781 system_pods.go:61] "kube-controller-manager-addons-20210813032940-2022292" [d2701d00-a6a6-4ab5-b211-39592390ce8e] Running
	I0813 03:32:44.758731 2022781 system_pods.go:61] "kube-proxy-9knsw" [05bf3f71-808d-4e24-a416-a4434e16e0ac] Running
	I0813 03:32:44.758743 2022781 system_pods.go:61] "kube-scheduler-addons-20210813032940-2022292" [6488f7a4-94e7-41c1-b202-305d463dfac2] Running
	I0813 03:32:44.758747 2022781 system_pods.go:61] "metrics-server-77c99ccb96-vn6tn" [985bccb5-7c0b-4df0-91ce-0cd5e67a9688] Running
	I0813 03:32:44.758755 2022781 system_pods.go:61] "registry-5f6m6" [b842920d-03bf-4426-9765-5fb36b90afb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 03:32:44.758768 2022781 system_pods.go:61] "registry-proxy-dg8n7" [3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 03:32:44.758774 2022781 system_pods.go:61] "snapshot-controller-989f9ddc8-6wzsp" [4b25bcd7-a3bd-4549-9476-87a13b4022d1] Running
	I0813 03:32:44.758779 2022781 system_pods.go:61] "snapshot-controller-989f9ddc8-shj76" [0d371d4d-113a-4eb0-bb5d-4d52d2ecf7a5] Running
	I0813 03:32:44.758784 2022781 system_pods.go:61] "storage-provisioner" [9788a546-bd3b-45bb-98c8-f5dc3efa1001] Running
	I0813 03:32:44.758792 2022781 system_pods.go:74] duration metric: took 10.803729677s to wait for pod list to return data ...
	I0813 03:32:44.758802 2022781 default_sa.go:34] waiting for default service account to be created ...
	I0813 03:32:44.761387 2022781 default_sa.go:45] found service account: "default"
	I0813 03:32:44.761408 2022781 default_sa.go:55] duration metric: took 2.593402ms for default service account to be created ...
	I0813 03:32:44.761414 2022781 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 03:32:44.774394 2022781 system_pods.go:86] 18 kube-system pods found
	I0813 03:32:44.774420 2022781 system_pods.go:89] "coredns-558bd4d5db-69x4l" [ef73518e-08da-4a27-a504-85f6e14fde4e] Running
	I0813 03:32:44.774427 2022781 system_pods.go:89] "csi-hostpath-attacher-0" [5b8c9e1d-36af-484a-8f71-8cbdc93e1848] Running
	I0813 03:32:44.774432 2022781 system_pods.go:89] "csi-hostpath-provisioner-0" [d285e137-046a-4c9f-8a5c-b513a07b4ac1] Running
	I0813 03:32:44.774441 2022781 system_pods.go:89] "csi-hostpath-resizer-0" [d005f1b6-ee22-4ef7-aabe-9ad79b904d8e] Running
	I0813 03:32:44.774451 2022781 system_pods.go:89] "csi-hostpath-snapshotter-0" [203926d6-73de-49f5-8477-3d1cf26d233e] Running
	I0813 03:32:44.774456 2022781 system_pods.go:89] "csi-hostpathplugin-0" [e5a40ad3-af6f-4aca-9d33-5d9620d28d85] Running
	I0813 03:32:44.774464 2022781 system_pods.go:89] "etcd-addons-20210813032940-2022292" [5e80a189-29fa-44b4-b290-7896746c4542] Running
	I0813 03:32:44.774469 2022781 system_pods.go:89] "kindnet-6qhgq" [41b60387-4d90-4496-a617-d04aaf6d654a] Running
	I0813 03:32:44.774475 2022781 system_pods.go:89] "kube-apiserver-addons-20210813032940-2022292" [e344bbc9-9190-49fe-915e-c8460a1fbe6e] Running
	I0813 03:32:44.774484 2022781 system_pods.go:89] "kube-controller-manager-addons-20210813032940-2022292" [d2701d00-a6a6-4ab5-b211-39592390ce8e] Running
	I0813 03:32:44.774489 2022781 system_pods.go:89] "kube-proxy-9knsw" [05bf3f71-808d-4e24-a416-a4434e16e0ac] Running
	I0813 03:32:44.774497 2022781 system_pods.go:89] "kube-scheduler-addons-20210813032940-2022292" [6488f7a4-94e7-41c1-b202-305d463dfac2] Running
	I0813 03:32:44.774502 2022781 system_pods.go:89] "metrics-server-77c99ccb96-vn6tn" [985bccb5-7c0b-4df0-91ce-0cd5e67a9688] Running
	I0813 03:32:44.774514 2022781 system_pods.go:89] "registry-5f6m6" [b842920d-03bf-4426-9765-5fb36b90afb9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0813 03:32:44.774522 2022781 system_pods.go:89] "registry-proxy-dg8n7" [3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031] Running / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0813 03:32:44.774530 2022781 system_pods.go:89] "snapshot-controller-989f9ddc8-6wzsp" [4b25bcd7-a3bd-4549-9476-87a13b4022d1] Running
	I0813 03:32:44.774536 2022781 system_pods.go:89] "snapshot-controller-989f9ddc8-shj76" [0d371d4d-113a-4eb0-bb5d-4d52d2ecf7a5] Running
	I0813 03:32:44.774545 2022781 system_pods.go:89] "storage-provisioner" [9788a546-bd3b-45bb-98c8-f5dc3efa1001] Running
	I0813 03:32:44.774550 2022781 system_pods.go:126] duration metric: took 13.132138ms to wait for k8s-apps to be running ...
	I0813 03:32:44.774559 2022781 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 03:32:44.774606 2022781 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 03:32:44.786726 2022781 system_svc.go:56] duration metric: took 12.16177ms WaitForService to wait for kubelet.
	I0813 03:32:44.786742 2022781 kubeadm.go:547] duration metric: took 1m47.594069487s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 03:32:44.786764 2022781 node_conditions.go:102] verifying NodePressure condition ...
	I0813 03:32:44.790042 2022781 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0813 03:32:44.790072 2022781 node_conditions.go:123] node cpu capacity is 2
	I0813 03:32:44.790084 2022781 node_conditions.go:105] duration metric: took 3.315795ms to run NodePressure ...
	I0813 03:32:44.790093 2022781 start.go:231] waiting for startup goroutines ...
	I0813 03:32:45.059527 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:45.558450 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:46.058008 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:46.559409 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:47.058981 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:47.559418 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:48.058295 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:48.559417 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:49.058479 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:49.558767 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:50.059036 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:50.559080 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:51.058876 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:51.559246 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:52.059058 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:52.559792 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:53.059872 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:53.559369 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:54.059719 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:54.558692 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:55.059624 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:55.558707 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:56.058551 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:56.558705 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:57.059182 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:57.558288 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:58.058885 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:58.559208 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:59.058341 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:32:59.558602 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:00.058816 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:00.559084 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:01.065657 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:01.558657 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:02.059321 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:02.559541 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:03.059005 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:03.559631 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:04.058927 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:04.558678 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:05.059707 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:05.558780 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:06.058956 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:06.559528 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:07.059164 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:07.558609 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:08.059085 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:08.558274 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:09.058026 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:09.558984 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:10.058747 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:10.558098 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:11.058941 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:11.559199 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:12.059601 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:12.562803 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:13.058537 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:13.558131 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:14.059592 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:14.558844 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:15.059696 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:15.559326 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:16.059259 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:16.558421 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:17.058713 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:17.558903 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:18.059615 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:18.558863 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:19.058406 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:19.558672 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:20.059847 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:20.558399 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:21.137157 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:21.558717 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:22.059791 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:22.559467 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:23.058838 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:23.558887 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:24.060067 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:24.559548 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:25.058905 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:25.558800 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:26.059414 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:26.558703 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:27.059411 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:27.559626 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:28.059432 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:28.559361 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:29.059128 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:29.558985 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:30.062482 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:30.558231 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:31.058771 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:33:31.558406 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... same "waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]" message repeated every ~500ms from 03:33:32 through 03:35:47 while the pod remained Pending ...]
	I0813 03:35:47.558864 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:48.059706 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:48.559293 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:49.059773 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:49.567350 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:50.058758 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:50.559247 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:51.058051 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:51.559512 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:52.059753 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:52.558876 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:53.058844 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:53.557996 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:54.059339 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:54.557861 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:55.058559 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:55.558563 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:56.059269 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:56.559290 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:57.058409 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:57.558536 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:58.059072 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:58.559398 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:59.058700 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:35:59.558646 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:00.058702 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:00.558682 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:01.058670 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:01.562293 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:02.058643 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:02.558304 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:03.059405 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:03.559047 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:04.062121 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:04.558683 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:05.059301 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:05.558453 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:06.059156 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:06.558705 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:07.059273 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:07.558538 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:08.059190 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:08.559068 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:09.059945 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:09.558740 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:10.059106 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:10.558472 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:11.059243 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:11.558710 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:12.059732 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:12.558567 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:13.059209 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:13.558794 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:14.063337 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:14.558442 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:15.058715 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:15.559174 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:16.058830 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:16.559240 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:17.058916 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:17.559538 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:18.058831 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:18.558826 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:19.058516 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:19.558183 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:20.058188 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:20.558110 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:21.058024 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:21.558235 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:22.058995 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:22.619655 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:23.059154 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:23.558587 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:24.059440 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:24.558332 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:25.059619 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:25.558688 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:26.062144 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:26.558312 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:27.059458 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:27.559085 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:28.058413 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:28.558317 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:29.059543 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:29.557925 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:30.058612 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:30.558558 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:31.058092 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:31.558239 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:32.059323 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:32.558001 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:33.059257 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:33.558742 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:34.058873 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:34.558418 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:35.058383 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:35.558235 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:36.058836 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:36.559150 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:37.059481 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:37.558194 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:38.058814 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:38.559211 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:39.059575 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:39.558350 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:40.058603 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:40.559147 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:41.059124 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:41.558651 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:42.059562 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:42.558506 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:43.058986 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:43.558651 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:44.059589 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:44.558516 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:45.059125 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:45.558154 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:46.058948 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:46.558804 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:47.059781 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:47.558728 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:48.059027 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:48.559730 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:49.059279 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:49.578391 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:50.058436 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:50.558971 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:51.058593 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:51.558295 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:52.058886 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:52.559329 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:53.058756 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:53.557754 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:54.059400 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:54.558199 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:55.058119 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:55.560244 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:56.059004 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:56.561964 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:57.059323 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:57.558414 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:58.058938 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:58.558617 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:59.059283 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:36:59.574886 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:00.059290 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:00.558331 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:01.058314 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:01.061314 2022781 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0813 03:37:01.061337 2022781 kapi.go:108] duration metric: took 6m0.046443715s to wait for kubernetes.io/minikube-addons=registry ...
	W0813 03:37:01.061452 2022781 out.go:242] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: timed out waiting for the condition]
	I0813 03:37:01.063789 2022781 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, olm, volumesnapshots, gcp-auth, ingress, csi-hostpath-driver
	I0813 03:37:01.063812 2022781 addons.go:344] enableAddons completed in 6m3.870880492s
	I0813 03:37:01.394020 2022781 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0813 03:37:01.396720 2022781 out.go:177] * Done! kubectl is now configured to use "addons-20210813032940-2022292" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	5e5d9abdcdbf4       d544402579747       57 seconds ago       Exited              olm-operator              8                   c13c7bddfc538
	dfbff09d21c82       d544402579747       About a minute ago   Exited              catalog-operator          8                   4f10e3f5836dd
	9e12a8c2bdf14       60dc18151daf8       About a minute ago   Exited              registry-proxy            8                   0f0d8cb5ccd61
	7b688e645fd22       1611cd07b61d5       7 minutes ago        Running             busybox                   0                   335b10c73edd4
	eab8e5f488bb8       357aab9e21a8d       11 minutes ago       Running             registry                  0                   e960d2aa7dc02
	92c18b0912a62       bac9ddccb0c70       16 minutes ago       Running             controller                0                   7b09157779528
	95dcf4a47993d       a883f7fc35610       17 minutes ago       Exited              patch                     0                   537ace0ace14b
	e912ae66fde6f       a883f7fc35610       17 minutes ago       Exited              create                    0                   02f6733c69e7f
	76df34c67e4d8       1a1f05a2cd7c2       17 minutes ago       Running             coredns                   0                   4013fbad24448
	f251119960206       ba04bb24b9575       17 minutes ago       Running             storage-provisioner       0                   4925d0c76d0fb
	b57e0dbb56f13       4ea38350a1beb       18 minutes ago       Running             kube-proxy                0                   77766a5e4eba5
	e811021829de7       f37b7c809e5dc       18 minutes ago       Running             kindnet-cni               0                   84d8cbe537f13
	fb47330aab572       cb310ff289d79       18 minutes ago       Running             kube-controller-manager   0                   c35a71b0e178a
	e34ccd1276019       44a6d50ef170d       18 minutes ago       Running             kube-apiserver            0                   802bb6c418a36
	3c1ce4b5f6d51       05b738aa1bc63       18 minutes ago       Running             etcd                      0                   a0068af440460
	ecb0d384c34ed       31a3b96cefc1e       18 minutes ago       Running             kube-scheduler            0                   6d9eb8373b6c3
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 03:29:47 UTC, end at Fri 2021-08-13 03:49:12 UTC. --
	Aug 13 03:48:58 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:58.941771745Z" level=info msg="shim disconnected" id=12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83
	Aug 13 03:48:58 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:58.941964893Z" level=error msg="copy shim log" error="read /proc/self/fd/274: file already closed"
	Aug 13 03:48:58 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:58.944127318Z" level=info msg="StopContainer for \"12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83\" returns successfully"
	Aug 13 03:48:58 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:58.944753858Z" level=info msg="StopPodSandbox for \"82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204\""
	Aug 13 03:48:58 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:58.944897282Z" level=info msg="Container to stop \"12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 13 03:48:58 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:58.977575464Z" level=info msg="TaskExit event &TaskExit{ContainerID:82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204,ID:82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204,Pid:3538,ExitStatus:137,ExitedAt:2021-08-13 03:48:58.977419124 +0000 UTC,XXX_unrecognized:[],}"
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.051393916Z" level=info msg="shim disconnected" id=82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.051603089Z" level=error msg="copy shim log" error="read /proc/self/fd/188: file already closed"
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.143159721Z" level=info msg="TearDown network for sandbox \"82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204\" successfully"
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.143304738Z" level=info msg="StopPodSandbox for \"82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204\" returns successfully"
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.754601064Z" level=info msg="StopPodSandbox for \"82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204\""
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.754773428Z" level=info msg="Container to stop \"12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.763224022Z" level=info msg="RemoveContainer for \"12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83\""
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.833040947Z" level=info msg="RemoveContainer for \"12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83\" returns successfully"
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.833574269Z" level=error msg="ContainerStatus for \"12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83\": not found"
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.865752433Z" level=info msg="TearDown network for sandbox \"82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204\" successfully"
	Aug 13 03:48:59 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:48:59.865800252Z" level=info msg="StopPodSandbox for \"82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204\" returns successfully"
	Aug 13 03:49:00 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:49:00.387295497Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:nginx,Uid:15d8912a-aaaa-4e7f-9212-a8819a810920,Namespace:default,Attempt:0,}"
	Aug 13 03:49:00 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:49:00.459203371Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc29ae20a4c4f8cf7a995bda670aa1460adb9311b500129a983c7b85a3a070d4 pid=18595
	Aug 13 03:49:00 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:49:00.525244497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx,Uid:15d8912a-aaaa-4e7f-9212-a8819a810920,Namespace:default,Attempt:0,} returns sandbox id \"fc29ae20a4c4f8cf7a995bda670aa1460adb9311b500129a983c7b85a3a070d4\""
	Aug 13 03:49:00 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:49:00.526819422Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 13 03:49:00 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:49:00.756395455Z" level=info msg="StopPodSandbox for \"82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204\""
	Aug 13 03:49:00 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:49:00.782236006Z" level=info msg="TearDown network for sandbox \"82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204\" successfully"
	Aug 13 03:49:00 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:49:00.782286754Z" level=info msg="StopPodSandbox for \"82923a3fd96f8bbce913558bf3635f6e270760ff9a9a85d91e84d00493f3d204\" returns successfully"
	Aug 13 03:49:01 addons-20210813032940-2022292 containerd[453]: time="2021-08-13T03:49:01.443023463Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	
	* 
	* ==> coredns [76df34c67e4d8322520aef9bf10ede58a153c61504ac00600249a57066b783fd] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               addons-20210813032940-2022292
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-20210813032940-2022292
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=addons-20210813032940-2022292
	                    minikube.k8s.io/updated_at=2021_08_13T03_30_43_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-20210813032940-2022292
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 03:30:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-20210813032940-2022292
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 03:49:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 03:48:48 +0000   Fri, 13 Aug 2021 03:30:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 03:48:48 +0000   Fri, 13 Aug 2021 03:30:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 03:48:48 +0000   Fri, 13 Aug 2021 03:30:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 03:48:48 +0000   Fri, 13 Aug 2021 03:31:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-20210813032940-2022292
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                cd349576-1400-4f29-881c-2488bb4cb8bc
	  Boot ID:                    0b91f2d0-31de-4b03-9973-67e3d0024ffb
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.6
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  default                     nginx                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  ingress-nginx               ingress-nginx-controller-59b45fb494-2m89h                100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         18m
	  kube-system                 coredns-558bd4d5db-69x4l                                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     18m
	  kube-system                 etcd-addons-20210813032940-2022292                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kindnet-6qhgq                                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-addons-20210813032940-2022292             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-20210813032940-2022292    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-9knsw                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-20210813032940-2022292             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 registry-5f6m6                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 registry-proxy-dg8n7                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  olm                         catalog-operator-75d496484d-xh6n8                        10m (0%)      0 (0%)      80Mi (1%)        0 (0%)         18m
	  olm                         olm-operator-859c88c96-whcps                             10m (0%)      0 (0%)      160Mi (2%)      0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                970m (48%)  100m (5%)
	  memory             550Mi (7%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 18m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x5 over 18m)  kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x4 over 18m)  kubelet     Node addons-20210813032940-2022292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x4 over 18m)  kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m                kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet     Node addons-20210813032940-2022292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet     Node addons-20210813032940-2022292 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                17m                kubelet     Node addons-20210813032940-2022292 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Aug13 02:55] systemd-journald[174]: Failed to send stream file descriptor to service manager: Connection refused
	
	* 
	* ==> etcd [3c1ce4b5f6d51d1115ca194a201f0f2319ae8953a147cd7517cafbb1fa677e11] <==
	* 2021-08-13 03:45:32.904153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:45:34.457912 I | mvcc: store.index: compact 1994
	2021-08-13 03:45:34.471785 I | mvcc: finished scheduled compaction at 1994 (took 13.3163ms)
	2021-08-13 03:45:42.904184 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:45:52.903474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:46:02.904046 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:46:12.903848 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:46:22.903919 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:46:32.904124 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:46:42.903688 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:46:52.904167 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:47:02.903719 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:47:12.903949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:47:22.903495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:47:32.903177 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:47:42.903675 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:47:52.903443 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:48:02.904107 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:48:12.903458 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:48:22.904250 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:48:32.904138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:48:42.903334 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:48:52.903326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:49:02.903780 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:49:12.903868 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  03:49:12 up 13:31,  0 users,  load average: 0.41, 0.40, 1.18
	Linux addons-20210813032940-2022292 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [e34ccd127601990a58a3b0bf970ea62cbd1f776d1e31437a7826d21db94646c3] <==
	* I0813 03:45:34.376645       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:45:34.376775       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:46:06.310517       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:46:06.310558       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:46:06.310567       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:46:39.600534       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:46:39.600664       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:46:39.600679       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:47:21.941235       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:47:21.941277       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:47:21.941380       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:47:56.189576       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:47:56.189615       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:47:56.189623       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:48:16.238407       1 controller.go:611] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0813 03:48:32.684312       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:48:32.684745       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:48:32.684767       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0813 03:48:54.115512       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0813 03:48:54.214807       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	W0813 03:48:54.218625       1 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
	I0813 03:48:59.700255       1 controller.go:611] quota admission added evaluator for: ingresses.networking.k8s.io
	I0813 03:49:09.310642       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:49:09.310680       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:49:09.310769       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [fb47330aab572a8f0160ba80c3c0231858b45b60f6be68e94341c50ea5e9a377] <==
	* I0813 03:48:46.438532       1 stateful_set.go:419] StatefulSet has been deleted kube-system/csi-hostpath-provisioner
	I0813 03:48:46.468391       1 stateful_set.go:419] StatefulSet has been deleted kube-system/csi-hostpath-resizer
	I0813 03:48:46.531389       1 stateful_set.go:419] StatefulSet has been deleted kube-system/csi-hostpath-snapshotter
	I0813 03:48:48.986344       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-5b9e6b3c-5424-468e-895b-62a61c88806f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^4cc7bd8b-fbe9-11eb-9998-86ba3f0d9b54") on node "addons-20210813032940-2022292" 
	I0813 03:48:48.989823       1 operation_generator.go:1483] Verified volume is safe to detach for volume "pvc-5b9e6b3c-5424-468e-895b-62a61c88806f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^4cc7bd8b-fbe9-11eb-9998-86ba3f0d9b54") on node "addons-20210813032940-2022292" 
	I0813 03:48:49.539227       1 operation_generator.go:483] DetachVolume.Detach succeeded for volume "pvc-5b9e6b3c-5424-468e-895b-62a61c88806f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^4cc7bd8b-fbe9-11eb-9998-86ba3f0d9b54") on node "addons-20210813032940-2022292" 
	E0813 03:48:54.116894       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:48:54.215990       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:48:54.219728       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:48:55.252510       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:48:55.282477       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:48:55.510540       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:48:57.161271       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0813 03:48:57.175557       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0813 03:48:57.175645       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 03:48:57.206231       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0813 03:48:57.206380       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0813 03:48:57.735201       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:48:58.566122       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:00.507413       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:01.350263       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:03.154230       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:10.307301       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:10.399735       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0813 03:49:10.510769       1 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [b57e0dbb56f139a4e11ed434de0567755dd4ffdb08231f247c678c04a27d72ea] <==
	* I0813 03:30:58.916090       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 03:30:58.916147       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 03:30:58.916169       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 03:30:59.027458       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 03:30:59.027492       1 server_others.go:212] Using iptables Proxier.
	I0813 03:30:59.027502       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 03:30:59.027515       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 03:30:59.027867       1 server.go:643] Version: v1.21.3
	I0813 03:30:59.037124       1 config.go:315] Starting service config controller
	I0813 03:30:59.037136       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 03:30:59.037153       1 config.go:224] Starting endpoint slice config controller
	I0813 03:30:59.037156       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 03:30:59.040531       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 03:30:59.051028       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 03:30:59.141975       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 03:30:59.142026       1 shared_informer.go:247] Caches are synced for service config 
	W0813 03:36:56.043703       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 03:45:52.045929       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	
	* 
	* ==> kube-scheduler [ecb0d384c34ed158fd62ade4f570a1ed8cfaca3ee6c53ab12a2de2f06720576b] <==
	* W0813 03:30:39.532366       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0813 03:30:39.532414       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0813 03:30:39.532440       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 03:30:39.532453       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 03:30:39.630521       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 03:30:39.630629       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 03:30:39.634320       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 03:30:39.634682       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0813 03:30:39.656905       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 03:30:39.657167       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.657219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 03:30:39.657322       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 03:30:39.657373       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.657425       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 03:30:39.657469       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.657522       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 03:30:39.657586       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 03:30:39.657632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 03:30:39.657673       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 03:30:39.657783       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 03:30:39.660525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 03:30:39.673014       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 03:30:40.516857       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 03:30:40.593078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0813 03:30:40.931616       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 03:29:47 UTC, end at Fri 2021-08-13 03:49:13 UTC. --
	Aug 13 03:48:59 addons-20210813032940-2022292 kubelet[1185]: I0813 03:48:59.834659    1185 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83} err="failed to get container status \"12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83\": rpc error: code = NotFound desc = an error occurred when try to find container \"12613d1687ffb354fff14850c8c78b8948a0727abbffb228342591923b8c9f83\": not found"
	Aug 13 03:48:59 addons-20210813032940-2022292 kubelet[1185]: I0813 03:48:59.841146    1185 scope.go:111] "RemoveContainer" containerID="dfbff09d21c8207a6be16d184937fac82ac6502f707c212889ae58cd3889bb4f"
	Aug 13 03:48:59 addons-20210813032940-2022292 kubelet[1185]: E0813 03:48:59.841489    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-xh6n8_olm(2a58a6fd-48ea-44a7-884d-f814b730c87a)\"" pod="olm/catalog-operator-75d496484d-xh6n8" podUID=2a58a6fd-48ea-44a7-884d-f814b730c87a
	Aug 13 03:48:59 addons-20210813032940-2022292 kubelet[1185]: I0813 03:48:59.919263    1185 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/985bccb5-7c0b-4df0-91ce-0cd5e67a9688-tmp-dir\") pod \"985bccb5-7c0b-4df0-91ce-0cd5e67a9688\" (UID: \"985bccb5-7c0b-4df0-91ce-0cd5e67a9688\") "
	Aug 13 03:48:59 addons-20210813032940-2022292 kubelet[1185]: I0813 03:48:59.919320    1185 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrdpr\" (UniqueName: \"kubernetes.io/projected/985bccb5-7c0b-4df0-91ce-0cd5e67a9688-kube-api-access-nrdpr\") pod \"985bccb5-7c0b-4df0-91ce-0cd5e67a9688\" (UID: \"985bccb5-7c0b-4df0-91ce-0cd5e67a9688\") "
	Aug 13 03:48:59 addons-20210813032940-2022292 kubelet[1185]: W0813 03:48:59.919446    1185 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/985bccb5-7c0b-4df0-91ce-0cd5e67a9688/volumes/kubernetes.io~empty-dir/tmp-dir: clearQuota called, but quotas disabled
	Aug 13 03:48:59 addons-20210813032940-2022292 kubelet[1185]: I0813 03:48:59.919559    1185 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/985bccb5-7c0b-4df0-91ce-0cd5e67a9688-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "985bccb5-7c0b-4df0-91ce-0cd5e67a9688" (UID: "985bccb5-7c0b-4df0-91ce-0cd5e67a9688"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 13 03:48:59 addons-20210813032940-2022292 kubelet[1185]: I0813 03:48:59.922667    1185 operation_generator.go:829] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985bccb5-7c0b-4df0-91ce-0cd5e67a9688-kube-api-access-nrdpr" (OuterVolumeSpecName: "kube-api-access-nrdpr") pod "985bccb5-7c0b-4df0-91ce-0cd5e67a9688" (UID: "985bccb5-7c0b-4df0-91ce-0cd5e67a9688"). InnerVolumeSpecName "kube-api-access-nrdpr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 13 03:49:00 addons-20210813032940-2022292 kubelet[1185]: I0813 03:49:00.020066    1185 reconciler.go:319] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/985bccb5-7c0b-4df0-91ce-0cd5e67a9688-tmp-dir\") on node \"addons-20210813032940-2022292\" DevicePath \"\""
	Aug 13 03:49:00 addons-20210813032940-2022292 kubelet[1185]: I0813 03:49:00.020105    1185 reconciler.go:319] "Volume detached for volume \"kube-api-access-nrdpr\" (UniqueName: \"kubernetes.io/projected/985bccb5-7c0b-4df0-91ce-0cd5e67a9688-kube-api-access-nrdpr\") on node \"addons-20210813032940-2022292\" DevicePath \"\""
	Aug 13 03:49:00 addons-20210813032940-2022292 kubelet[1185]: I0813 03:49:00.071285    1185 topology_manager.go:187] "Topology Admit Handler"
	Aug 13 03:49:00 addons-20210813032940-2022292 kubelet[1185]: W0813 03:49:00.085022    1185 container.go:586] Failed to update stats for container "/kubepods/besteffort/pod15d8912a-aaaa-4e7f-9212-a8819a810920": /sys/fs/cgroup/cpuset/kubepods/besteffort/pod15d8912a-aaaa-4e7f-9212-a8819a810920/cpuset.mems found to be empty, continuing to push stats
	Aug 13 03:49:00 addons-20210813032940-2022292 kubelet[1185]: I0813 03:49:00.221135    1185 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6rkg\" (UniqueName: \"kubernetes.io/projected/15d8912a-aaaa-4e7f-9212-a8819a810920-kube-api-access-n6rkg\") pod \"nginx\" (UID: \"15d8912a-aaaa-4e7f-9212-a8819a810920\") "
	Aug 13 03:49:01 addons-20210813032940-2022292 kubelet[1185]: E0813 03:49:01.443281    1185 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:alpine"
	Aug 13 03:49:01 addons-20210813032940-2022292 kubelet[1185]: E0813 03:49:01.443332    1185 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:alpine"
	Aug 13 03:49:01 addons-20210813032940-2022292 kubelet[1185]: E0813 03:49:01.443416    1185 kuberuntime_manager.go:864] container &Container{Name:nginx,Image:nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n6rkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod nginx_default(15d8912a-aaaa-4e7f-9212-a8819a810920): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 13 03:49:01 addons-20210813032940-2022292 kubelet[1185]: E0813 03:49:01.443463    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID=15d8912a-aaaa-4e7f-9212-a8819a810920
	Aug 13 03:49:01 addons-20210813032940-2022292 kubelet[1185]: E0813 03:49:01.764419    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx" podUID=15d8912a-aaaa-4e7f-9212-a8819a810920
	Aug 13 03:49:08 addons-20210813032940-2022292 kubelet[1185]: E0813 03:49:08.355098    1185 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod15d8912a-aaaa-4e7f-9212-a8819a810920\": RecentStats: unable to find data in memory cache]"
	Aug 13 03:49:09 addons-20210813032940-2022292 kubelet[1185]: I0813 03:49:09.840491    1185 scope.go:111] "RemoveContainer" containerID="5e5d9abdcdbf491571a80ec398090a2e9db048e0414e9d47e2678186d7788c72"
	Aug 13 03:49:09 addons-20210813032940-2022292 kubelet[1185]: E0813 03:49:09.840865    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=olm-operator pod=olm-operator-859c88c96-whcps_olm(9dfb17b5-db48-44a1-8daf-33ce6de73034)\"" pod="olm/olm-operator-859c88c96-whcps" podUID=9dfb17b5-db48-44a1-8daf-33ce6de73034
	Aug 13 03:49:12 addons-20210813032940-2022292 kubelet[1185]: I0813 03:49:12.840551    1185 scope.go:111] "RemoveContainer" containerID="dfbff09d21c8207a6be16d184937fac82ac6502f707c212889ae58cd3889bb4f"
	Aug 13 03:49:12 addons-20210813032940-2022292 kubelet[1185]: E0813 03:49:12.840909    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=catalog-operator pod=catalog-operator-75d496484d-xh6n8_olm(2a58a6fd-48ea-44a7-884d-f814b730c87a)\"" pod="olm/catalog-operator-75d496484d-xh6n8" podUID=2a58a6fd-48ea-44a7-884d-f814b730c87a
	Aug 13 03:49:12 addons-20210813032940-2022292 kubelet[1185]: I0813 03:49:12.841092    1185 scope.go:111] "RemoveContainer" containerID="9e12a8c2bdf148fa02bf682b582c4263d299d599d5c7760556fb4b568128ba59"
	Aug 13 03:49:12 addons-20210813032940-2022292 kubelet[1185]: E0813 03:49:12.841280    1185 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=registry-proxy pod=registry-proxy-dg8n7_kube-system(3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031)\"" pod="kube-system/registry-proxy-dg8n7" podUID=3af763b1-1ec2-4a5b-b1c6-9f9a9c21d031
	
	* 
	* ==> storage-provisioner [f251119960206b400f3c5fa69a1475ae4017f1585aedfcae1ba3a5e41c29eaca] <==
	* I0813 03:31:53.616300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 03:31:53.654387       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 03:31:53.654428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 03:31:53.684054       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 03:31:53.688430       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-20210813032940-2022292_487120fb-5274-456a-8b7e-33f90e734a44!
	I0813 03:31:53.696297       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7447bc9-2c2e-4fe9-978d-7328239a1c68", APIVersion:"v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-20210813032940-2022292_487120fb-5274-456a-8b7e-33f90e734a44 became leader
	I0813 03:31:53.792536       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-20210813032940-2022292_487120fb-5274-456a-8b7e-33f90e734a44!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-20210813032940-2022292 -n addons-20210813032940-2022292
helpers_test.go:262: (dbg) Run:  kubectl --context addons-20210813032940-2022292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: nginx ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx
helpers_test.go:273: ======> post-mortem[TestAddons/parallel/Olm]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context addons-20210813032940-2022292 describe pod nginx ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context addons-20210813032940-2022292 describe pod nginx ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx: exit status 1 (96.60585ms)

-- stdout --
	Name:         nginx
	Namespace:    default
	Priority:     0
	Node:         addons-20210813032940-2022292/192.168.49.2
	Start Time:   Fri, 13 Aug 2021 03:49:00 +0000
	Labels:       run=nginx
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  nginx:
	    Container ID:   
	    Image:          nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n6rkg (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-n6rkg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  13s   default-scheduler  Successfully assigned default/nginx to addons-20210813032940-2022292
	  Normal   Pulling    13s   kubelet            Pulling image "nginx:alpine"
	  Warning  Failed     12s   kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     12s   kubelet            Error: ErrImagePull
	  Normal   BackOff    12s   kubelet            Back-off pulling image "nginx:alpine"
	  Warning  Failed     12s   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r7rsv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2wdhx" not found

** /stderr **
helpers_test.go:278: kubectl --context addons-20210813032940-2022292 describe pod nginx ingress-nginx-admission-create-r7rsv ingress-nginx-admission-patch-2wdhx: exit status 1
--- FAIL: TestAddons/parallel/Olm (732.39s)
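The ErrImagePull failures above are Docker Hub's anonymous pull rate limit (HTTP 429 `toomanyrequests`). One common mitigation, sketched here only as an illustration (the `DOCKER_USER`/`DOCKER_PASS` variables are hypothetical placeholders, not part of this CI setup), is to make the cluster pull as an authenticated user via an image pull secret:

```shell
# Log the host's Docker client in to Docker Hub; authenticated accounts
# get a higher pull allowance than anonymous clients.
docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"

# Create an image pull secret from the same credentials and attach it to
# the default service account, so pods like the nginx pod above pull
# from registry-1.docker.io as an authenticated user.
kubectl create secret docker-registry dockerhub-creds \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username="$DOCKER_USER" \
  --docker-password="$DOCKER_PASS"
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'
```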

TestFunctional/parallel/PersistentVolumeClaim (188.2s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [c4795fe5-fc69-49f2-a14d-841404f48843] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005335682s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210813035500-2022292 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210813035500-2022292 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210813035500-2022292 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210813035500-2022292 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [9ba653dc-6c2f-4098-a0c5-51f489d17c03] Pending
helpers_test.go:343: "sp-pod" [9ba653dc-6c2f-4098-a0c5-51f489d17c03] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0813 03:59:45.289670 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: timed out waiting for the condition ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210813035500-2022292 -n functional-20210813035500-2022292
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2021-08-13 04:01:47.03967473 +0000 UTC m=+2005.051733923
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-20210813035500-2022292 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-20210813035500-2022292 describe po sp-pod -n default:
Name:         sp-pod
Namespace:    default
Priority:     0
Node:         functional-20210813035500-2022292/192.168.49.2
Start Time:   Fri, 13 Aug 2021 03:58:46 +0000
Labels:       test=storage-provisioner
Annotations:  <none>
Status:       Pending
IP:           10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvqcd (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-vvqcd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m1s                  default-scheduler  Successfully assigned default/sp-pod to functional-20210813035500-2022292
  Normal   Pulling    101s (x4 over 3m)     kubelet            Pulling image "nginx"
  Warning  Failed     100s (x4 over 2m59s)  kubelet            Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     100s (x4 over 2m59s)  kubelet            Error: ErrImagePull
  Warning  Failed     76s (x6 over 2m59s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    63s (x7 over 2m59s)   kubelet            Back-off pulling image "nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-20210813035500-2022292 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-20210813035500-2022292 logs sp-pod -n default: exit status 1 (129.064203ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-20210813035500-2022292 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: timed out waiting for the condition
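The sp-pod failure above has the same root cause as the earlier nginx pull failures: the anonymous Docker Hub rate limit. An alternative mitigation that avoids the registry entirely is to side-load the image into the cluster node before the test runs; a sketch, assuming the image can be pulled once (or is already cached) on the CI host:

```shell
# Pull once on the host, then copy the image into the minikube node's
# container runtime so the kubelet never has to contact
# registry-1.docker.io when the pod starts.
docker pull nginx
out/minikube-linux-arm64 -p functional-20210813035500-2022292 image load nginx
```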
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20210813035500-2022292
helpers_test.go:236: (dbg) docker inspect functional-20210813035500-2022292:

-- stdout --
	[
	    {
	        "Id": "40ddb22afcf4669616366421d874bb89dfb70f24c4e69ba25f5839e31a94ce25",
	        "Created": "2021-08-13T03:55:01.62623516Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2049075,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T03:55:02.04660523Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/40ddb22afcf4669616366421d874bb89dfb70f24c4e69ba25f5839e31a94ce25/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/40ddb22afcf4669616366421d874bb89dfb70f24c4e69ba25f5839e31a94ce25/hostname",
	        "HostsPath": "/var/lib/docker/containers/40ddb22afcf4669616366421d874bb89dfb70f24c4e69ba25f5839e31a94ce25/hosts",
	        "LogPath": "/var/lib/docker/containers/40ddb22afcf4669616366421d874bb89dfb70f24c4e69ba25f5839e31a94ce25/40ddb22afcf4669616366421d874bb89dfb70f24c4e69ba25f5839e31a94ce25-json.log",
	        "Name": "/functional-20210813035500-2022292",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20210813035500-2022292:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20210813035500-2022292",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/50b727a5f79b54de7fec1bbce5b2db54aa2d88292a49dc22efa7df65c1b47166-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa3171af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/docker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/50b727a5f79b54de7fec1bbce5b2db54aa2d88292a49dc22efa7df65c1b47166/merged",
	                "UpperDir": "/var/lib/docker/overlay2/50b727a5f79b54de7fec1bbce5b2db54aa2d88292a49dc22efa7df65c1b47166/diff",
	                "WorkDir": "/var/lib/docker/overlay2/50b727a5f79b54de7fec1bbce5b2db54aa2d88292a49dc22efa7df65c1b47166/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20210813035500-2022292",
	                "Source": "/var/lib/docker/volumes/functional-20210813035500-2022292/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20210813035500-2022292",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20210813035500-2022292",
	                "name.minikube.sigs.k8s.io": "functional-20210813035500-2022292",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "957064229dee34cd7d17e24cb7f2ec14f262c7d74d1b22366870866a1d245ccb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50812"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50809"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50811"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50810"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/957064229dee",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20210813035500-2022292": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "40ddb22afcf4",
	                        "functional-20210813035500-2022292"
	                    ],
	                    "NetworkID": "0df155b42a67e3b203e9c187b82da53ab04db685071a14f5206d6c61765935d3",
	                    "EndpointID": "7c6ee5c56dc5e02a72410da84edb5c3632c3a7a602b365c2533503cb5fd679c5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
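The `NetworkSettings.Ports` block in the inspect output above is how minikube finds the host-side port for each published container port (e.g. `22/tcp` → `50813`, which the log later dials for SSH). As a minimal illustrative sketch (the `host_port` helper and the trimmed `inspect_fragment` below are hypothetical, not part of minikube), that structure can be read like so:

```python
import json

# A trimmed copy of the "Ports" mapping from the docker inspect output above.
inspect_fragment = """
{
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "50813"}],
    "8441/tcp": [{"HostIp": "127.0.0.1", "HostPort": "50810"}]
}
"""

def host_port(ports: dict, container_port: str) -> str:
    """Return the first host port bound to the given container port."""
    return ports[container_port][0]["HostPort"]

ports = json.loads(inspect_fragment)
print(host_port(ports, "22/tcp"))    # SSH port: 50813
print(host_port(ports, "8441/tcp"))  # apiserver port: 50810
```

The equivalent one-liner against a live container is the Go template minikube itself runs later in this log: `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" <container>`.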
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-20210813035500-2022292 -n functional-20210813035500-2022292
helpers_test.go:245: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p functional-20210813035500-2022292 logs -n 25: (1.061056379s)
helpers_test.go:253: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                                Args                                |              Profile              |  User   | Version |          Start Time           |           End Time            |
	|---------|--------------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:23 UTC | Fri, 13 Aug 2021 03:58:23 UTC |
	|         | config unset cpus                                                  |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:23 UTC | Fri, 13 Aug 2021 03:58:24 UTC |
	|         | image ls                                                           |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292 image load                       | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:24 UTC | Fri, 13 Aug 2021 03:58:25 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_arm64/busybox.tar  |                                   |         |         |                               |                               |
	| ssh     | -p                                                                 | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:25 UTC | Fri, 13 Aug 2021 03:58:25 UTC |
	|         | functional-20210813035500-2022292                                  |                                   |         |         |                               |                               |
	|         | -- sudo crictl images                                              |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292 image load                       | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:26 UTC | Fri, 13 Aug 2021 03:58:26 UTC |
	|         | docker.io/library/busybox:remove-functional-20210813035500-2022292 |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292 image rm                         | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:27 UTC | Fri, 13 Aug 2021 03:58:27 UTC |
	|         | docker.io/library/busybox:remove-functional-20210813035500-2022292 |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292 image build -t                   | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:24 UTC | Fri, 13 Aug 2021 03:58:27 UTC |
	|         | localhost/my-image:functional-20210813035500-2022292               |                                   |         |         |                               |                               |
	|         | testdata/build                                                     |                                   |         |         |                               |                               |
	| ssh     | -p                                                                 | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:27 UTC | Fri, 13 Aug 2021 03:58:27 UTC |
	|         | functional-20210813035500-2022292                                  |                                   |         |         |                               |                               |
	|         | -- sudo crictl images                                              |                                   |         |         |                               |                               |
	| ssh     | -p functional-20210813035500-2022292                               | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:27 UTC | Fri, 13 Aug 2021 03:58:27 UTC |
	|         | -- sudo crictl inspecti                                            |                                   |         |         |                               |                               |
	|         | localhost/my-image:functional-20210813035500-2022292               |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:28 UTC | Fri, 13 Aug 2021 03:58:28 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /etc/ssl/certs/2022292.pem                                         |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:28 UTC | Fri, 13 Aug 2021 03:58:28 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /usr/share/ca-certificates/2022292.pem                             |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:28 UTC | Fri, 13 Aug 2021 03:58:28 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /etc/ssl/certs/51391683.0                                          |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292 image load                       | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:28 UTC | Fri, 13 Aug 2021 03:58:29 UTC |
	|         | docker.io/library/busybox:load-functional-20210813035500-2022292   |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:29 UTC | Fri, 13 Aug 2021 03:58:29 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /etc/ssl/certs/20222922.pem                                        |                                   |         |         |                               |                               |
	| ssh     | -p functional-20210813035500-2022292 -- sudo crictl inspecti       | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:29 UTC | Fri, 13 Aug 2021 03:58:29 UTC |
	|         | docker.io/library/busybox:load-functional-20210813035500-2022292   |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:29 UTC | Fri, 13 Aug 2021 03:58:29 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /usr/share/ca-certificates/20222922.pem                            |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:29 UTC | Fri, 13 Aug 2021 03:58:29 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /etc/ssl/certs/3ec20f2e.0                                          |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:30 UTC | Fri, 13 Aug 2021 03:58:30 UTC |
	|         | cp testdata/cp-test.txt                                            |                                   |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                           |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:30 UTC | Fri, 13 Aug 2021 03:58:30 UTC |
	|         | ssh sudo cat                                                       |                                   |         |         |                               |                               |
	|         | /home/docker/cp-test.txt                                           |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:30 UTC | Fri, 13 Aug 2021 03:58:30 UTC |
	|         | ssh echo hello                                                     |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:30 UTC | Fri, 13 Aug 2021 03:58:31 UTC |
	|         | ssh cat /etc/hostname                                              |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:39 UTC | Fri, 13 Aug 2021 03:58:40 UTC |
	|         | service list                                                       |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:40 UTC | Fri, 13 Aug 2021 03:58:40 UTC |
	|         | service --namespace=default                                        |                                   |         |         |                               |                               |
	|         | --https --url hello-node                                           |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:40 UTC | Fri, 13 Aug 2021 03:58:40 UTC |
	|         | service hello-node --url                                           |                                   |         |         |                               |                               |
	|         | --format={{.IP}}                                                   |                                   |         |         |                               |                               |
	| -p      | functional-20210813035500-2022292                                  | functional-20210813035500-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 03:58:40 UTC | Fri, 13 Aug 2021 03:58:41 UTC |
	|         | service hello-node --url                                           |                                   |         |         |                               |                               |
	|---------|--------------------------------------------------------------------|-----------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 03:57:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 03:57:32.033343 2052796 out.go:298] Setting OutFile to fd 1 ...
	I0813 03:57:32.033414 2052796 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:57:32.033417 2052796 out.go:311] Setting ErrFile to fd 2...
	I0813 03:57:32.033420 2052796 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:57:32.033546 2052796 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 03:57:32.033782 2052796 out.go:305] Setting JSON to false
	I0813 03:57:32.034640 2052796 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":49196,"bootTime":1628777856,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 03:57:32.034700 2052796 start.go:121] virtualization:  
	I0813 03:57:32.037994 2052796 out.go:177] * [functional-20210813035500-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0813 03:57:32.038078 2052796 notify.go:169] Checking for updates...
	I0813 03:57:32.040599 2052796 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 03:57:32.042846 2052796 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 03:57:32.045170 2052796 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 03:57:32.047115 2052796 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 03:57:32.047542 2052796 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 03:57:32.096704 2052796 docker.go:132] docker version: linux-20.10.8
	I0813 03:57:32.096786 2052796 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:57:32.207911 2052796 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 03:57:32.146536174 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:57:32.208013 2052796 docker.go:244] overlay module found
	I0813 03:57:32.211359 2052796 out.go:177] * Using the docker driver based on existing profile
	I0813 03:57:32.211379 2052796 start.go:278] selected driver: docker
	I0813 03:57:32.211388 2052796 start.go:751] validating driver "docker" against &{Name:functional-20210813035500-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813035500-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:57:32.211506 2052796 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0813 03:57:32.211615 2052796 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:57:32.294949 2052796 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 03:57:32.239786068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:57:32.295288 2052796 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 03:57:32.295303 2052796 cni.go:93] Creating CNI manager for ""
	I0813 03:57:32.295308 2052796 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:57:32.295317 2052796 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 03:57:32.295321 2052796 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 03:57:32.295326 2052796 start_flags.go:277] config:
	{Name:functional-20210813035500-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813035500-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:57:32.298007 2052796 out.go:177] * Starting control plane node functional-20210813035500-2022292 in cluster functional-20210813035500-2022292
	I0813 03:57:32.298040 2052796 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 03:57:32.300108 2052796 out.go:177] * Pulling base image ...
	I0813 03:57:32.300133 2052796 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:57:32.300189 2052796 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0813 03:57:32.300197 2052796 cache.go:56] Caching tarball of preloaded images
	I0813 03:57:32.300373 2052796 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 03:57:32.300386 2052796 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 03:57:32.300443 2052796 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 03:57:32.300691 2052796 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/config.json ...
	I0813 03:57:32.334696 2052796 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 03:57:32.334714 2052796 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 03:57:32.334730 2052796 cache.go:205] Successfully downloaded all kic artifacts
	I0813 03:57:32.334753 2052796 start.go:313] acquiring machines lock for functional-20210813035500-2022292: {Name:mk0009ccfab0c0a03f4782e2164b750a6f27ed1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 03:57:32.334839 2052796 start.go:317] acquired machines lock for "functional-20210813035500-2022292" in 66.568µs
	I0813 03:57:32.334857 2052796 start.go:93] Skipping create...Using existing machine configuration
	I0813 03:57:32.334862 2052796 fix.go:55] fixHost starting: 
	I0813 03:57:32.335160 2052796 cli_runner.go:115] Run: docker container inspect functional-20210813035500-2022292 --format={{.State.Status}}
	I0813 03:57:32.367174 2052796 fix.go:108] recreateIfNeeded on functional-20210813035500-2022292: state=Running err=<nil>
	W0813 03:57:32.367193 2052796 fix.go:134] unexpected machine state, will restart: <nil>
	I0813 03:57:32.369447 2052796 out.go:177] * Updating the running docker "functional-20210813035500-2022292" container ...
	I0813 03:57:32.369468 2052796 machine.go:88] provisioning docker machine ...
	I0813 03:57:32.369480 2052796 ubuntu.go:169] provisioning hostname "functional-20210813035500-2022292"
	I0813 03:57:32.369528 2052796 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210813035500-2022292
	I0813 03:57:32.399613 2052796 main.go:130] libmachine: Using SSH client type: native
	I0813 03:57:32.399805 2052796 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50813 <nil> <nil>}
	I0813 03:57:32.399817 2052796 main.go:130] libmachine: About to run SSH command:
	sudo hostname functional-20210813035500-2022292 && echo "functional-20210813035500-2022292" | sudo tee /etc/hostname
	I0813 03:57:32.518889 2052796 main.go:130] libmachine: SSH cmd err, output: <nil>: functional-20210813035500-2022292
	
	I0813 03:57:32.518940 2052796 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210813035500-2022292
	I0813 03:57:32.551554 2052796 main.go:130] libmachine: Using SSH client type: native
	I0813 03:57:32.551697 2052796 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 50813 <nil> <nil>}
	I0813 03:57:32.551718 2052796 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-20210813035500-2022292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-20210813035500-2022292/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-20210813035500-2022292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 03:57:32.663617 2052796 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 03:57:32.663632 2052796 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 03:57:32.663649 2052796 ubuntu.go:177] setting up certificates
	I0813 03:57:32.663657 2052796 provision.go:83] configureAuth start
	I0813 03:57:32.663716 2052796 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210813035500-2022292
	I0813 03:57:32.696096 2052796 provision.go:137] copyHostCerts
	I0813 03:57:32.696143 2052796 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 03:57:32.696149 2052796 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 03:57:32.696202 2052796 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 03:57:32.696287 2052796 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 03:57:32.696292 2052796 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 03:57:32.696310 2052796 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 03:57:32.696394 2052796 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 03:57:32.696398 2052796 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 03:57:32.696416 2052796 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 03:57:32.696499 2052796 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.functional-20210813035500-2022292 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-20210813035500-2022292]
	I0813 03:57:33.409193 2052796 provision.go:171] copyRemoteCerts
	I0813 03:57:33.409271 2052796 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 03:57:33.409309 2052796 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210813035500-2022292
	I0813 03:57:33.440927 2052796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50813 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/functional-20210813035500-2022292/id_rsa Username:docker}
	I0813 03:57:33.522438 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 03:57:33.537228 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0813 03:57:33.552992 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 03:57:33.567757 2052796 provision.go:86] duration metric: configureAuth took 904.092005ms
	I0813 03:57:33.567767 2052796 ubuntu.go:193] setting minikube options for container-runtime
	I0813 03:57:33.567955 2052796 machine.go:91] provisioned docker machine in 1.198482066s
	I0813 03:57:33.567961 2052796 start.go:267] post-start starting for "functional-20210813035500-2022292" (driver="docker")
	I0813 03:57:33.567966 2052796 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 03:57:33.568004 2052796 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 03:57:33.568036 2052796 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210813035500-2022292
	I0813 03:57:33.598883 2052796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50813 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/functional-20210813035500-2022292/id_rsa Username:docker}
	I0813 03:57:33.682292 2052796 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 03:57:33.684738 2052796 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 03:57:33.684752 2052796 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 03:57:33.684762 2052796 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 03:57:33.684767 2052796 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 03:57:33.684773 2052796 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 03:57:33.684812 2052796 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 03:57:33.684886 2052796 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem -> 20222922.pem in /etc/ssl/certs
	I0813 03:57:33.684958 2052796 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/test/nested/copy/2022292/hosts -> hosts in /etc/test/nested/copy/2022292
	I0813 03:57:33.684989 2052796 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2022292
	I0813 03:57:33.690624 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem --> /etc/ssl/certs/20222922.pem (1708 bytes)
	I0813 03:57:33.705863 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/test/nested/copy/2022292/hosts --> /etc/test/nested/copy/2022292/hosts (40 bytes)
	I0813 03:57:33.720265 2052796 start.go:270] post-start completed in 152.294007ms
	I0813 03:57:33.720309 2052796 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 03:57:33.720388 2052796 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210813035500-2022292
	I0813 03:57:33.751188 2052796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50813 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/functional-20210813035500-2022292/id_rsa Username:docker}
	I0813 03:57:33.832410 2052796 fix.go:57] fixHost completed within 1.497543091s
	I0813 03:57:33.832422 2052796 start.go:80] releasing machines lock for "functional-20210813035500-2022292", held for 1.497576427s
	I0813 03:57:33.832499 2052796 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-20210813035500-2022292
	I0813 03:57:33.869533 2052796 ssh_runner.go:149] Run: systemctl --version
	I0813 03:57:33.869571 2052796 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210813035500-2022292
	I0813 03:57:33.869610 2052796 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 03:57:33.869657 2052796 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210813035500-2022292
	I0813 03:57:33.909709 2052796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50813 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/functional-20210813035500-2022292/id_rsa Username:docker}
	I0813 03:57:33.922173 2052796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50813 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/functional-20210813035500-2022292/id_rsa Username:docker}
	I0813 03:57:33.991830 2052796 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 03:57:34.128920 2052796 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 03:57:34.138126 2052796 docker.go:153] disabling docker service ...
	I0813 03:57:34.138170 2052796 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 03:57:34.147584 2052796 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 03:57:34.156214 2052796 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 03:57:34.273224 2052796 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 03:57:34.391329 2052796 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 03:57:34.400166 2052796 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 03:57:34.411977 2052796 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5ta3IKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 03:57:34.423980 2052796 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 03:57:34.429628 2052796 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 03:57:34.436269 2052796 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 03:57:34.552610 2052796 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 03:57:34.646473 2052796 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 03:57:34.646526 2052796 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 03:57:34.649945 2052796 start.go:417] Will wait 60s for crictl version
	I0813 03:57:34.649995 2052796 ssh_runner.go:149] Run: sudo crictl version
	I0813 03:57:34.676841 2052796 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T03:57:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 03:57:45.727210 2052796 ssh_runner.go:149] Run: sudo crictl version
	I0813 03:57:45.749464 2052796 start.go:426] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.6
	RuntimeApiVersion:  v1alpha2
	I0813 03:57:45.749506 2052796 ssh_runner.go:149] Run: containerd --version
	I0813 03:57:45.774884 2052796 ssh_runner.go:149] Run: containerd --version
	I0813 03:57:45.799123 2052796 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.6 ...
	I0813 03:57:45.799190 2052796 cli_runner.go:115] Run: docker network inspect functional-20210813035500-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 03:57:45.829346 2052796 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 03:57:45.834414 2052796 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0813 03:57:45.834478 2052796 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:57:45.834534 2052796 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 03:57:45.856985 2052796 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 03:57:45.856993 2052796 containerd.go:517] Images already preloaded, skipping extraction
	I0813 03:57:45.857029 2052796 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 03:57:45.878347 2052796 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 03:57:45.878355 2052796 cache_images.go:74] Images are preloaded, skipping loading
	I0813 03:57:45.878392 2052796 ssh_runner.go:149] Run: sudo crictl info
	I0813 03:57:45.900093 2052796 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0813 03:57:45.900118 2052796 cni.go:93] Creating CNI manager for ""
	I0813 03:57:45.900127 2052796 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:57:45.900136 2052796 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 03:57:45.900148 2052796 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-20210813035500-2022292 NodeName:functional-20210813035500-2022292 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 03:57:45.900265 2052796 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "functional-20210813035500-2022292"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 03:57:45.900404 2052796 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=functional-20210813035500-2022292 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:functional-20210813035500-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0813 03:57:45.900450 2052796 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 03:57:45.906502 2052796 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 03:57:45.906550 2052796 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 03:57:45.912386 2052796 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (578 bytes)
	I0813 03:57:45.923463 2052796 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 03:57:45.935978 2052796 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1933 bytes)
	I0813 03:57:45.946901 2052796 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 03:57:45.949436 2052796 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292 for IP: 192.168.49.2
	I0813 03:57:45.949469 2052796 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 03:57:45.949481 2052796 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 03:57:45.949524 2052796 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.key
	I0813 03:57:45.949537 2052796 certs.go:290] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/apiserver.key.dd3b5fb2
	I0813 03:57:45.949554 2052796 certs.go:290] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/proxy-client.key
	I0813 03:57:45.949640 2052796 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/2022292.pem (1338 bytes)
	W0813 03:57:45.949673 2052796 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/2022292_empty.pem, impossibly tiny 0 bytes
	I0813 03:57:45.949682 2052796 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 03:57:45.949704 2052796 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 03:57:45.949724 2052796 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 03:57:45.949747 2052796 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 03:57:45.949786 2052796 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem (1708 bytes)
	I0813 03:57:45.950915 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 03:57:45.965534 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0813 03:57:45.980187 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 03:57:45.995316 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0813 03:57:46.010214 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 03:57:46.024446 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 03:57:46.039221 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 03:57:46.053513 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 03:57:46.068312 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem --> /usr/share/ca-certificates/20222922.pem (1708 bytes)
	I0813 03:57:46.082851 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 03:57:46.097426 2052796 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/2022292.pem --> /usr/share/ca-certificates/2022292.pem (1338 bytes)
	I0813 03:57:46.112214 2052796 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 03:57:46.122850 2052796 ssh_runner.go:149] Run: openssl version
	I0813 03:57:46.127158 2052796 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2022292.pem && ln -fs /usr/share/ca-certificates/2022292.pem /etc/ssl/certs/2022292.pem"
	I0813 03:57:46.133358 2052796 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2022292.pem
	I0813 03:57:46.135954 2052796 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 03:55 /usr/share/ca-certificates/2022292.pem
	I0813 03:57:46.135994 2052796 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2022292.pem
	I0813 03:57:46.140277 2052796 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2022292.pem /etc/ssl/certs/51391683.0"
	I0813 03:57:46.145908 2052796 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20222922.pem && ln -fs /usr/share/ca-certificates/20222922.pem /etc/ssl/certs/20222922.pem"
	I0813 03:57:46.152033 2052796 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/20222922.pem
	I0813 03:57:46.154723 2052796 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 03:55 /usr/share/ca-certificates/20222922.pem
	I0813 03:57:46.154754 2052796 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20222922.pem
	I0813 03:57:46.160596 2052796 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20222922.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 03:57:46.166301 2052796 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 03:57:46.172589 2052796 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:57:46.175270 2052796 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 13 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:57:46.175307 2052796 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 03:57:46.179523 2052796 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
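The `openssl x509 -hash` / `ln -fs` sequence above implements OpenSSL's hashed-directory convention for trust stores: each CA certificate under `/etc/ssl/certs` gets a symlink named `<subject-hash>.0` so verification can locate it by hash. A minimal self-contained sketch of the same idea, using a throwaway self-signed cert in a temp directory (the subject name and paths are illustrative, not from the log):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)
# Throwaway self-signed CA cert (illustrative subject, 1-day validity).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
# Same command minikube runs above: derive the OpenSSL subject hash...
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
# ...and link the cert under <hash>.0, as done for /etc/ssl/certs.
ln -fs "$dir/ca.pem" "$dir/$hash.0"
# OpenSSL can now find the CA by hash when handed the directory via -CApath.
openssl verify -CApath "$dir" "$dir/ca.pem"
```

The `.0` suffix is a collision counter; a second certificate with the same subject hash would be linked as `<hash>.1`.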
	I0813 03:57:46.185189 2052796 kubeadm.go:390] StartCluster: {Name:functional-20210813035500-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813035500-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registr
y-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:57:46.185295 2052796 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 03:57:46.185335 2052796 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 03:57:46.214337 2052796 cri.go:76] found id: "584edf842ed29d81275517f795258fdce46c4608d37a80daa0622aea955c9e26"
	I0813 03:57:46.214347 2052796 cri.go:76] found id: "26de2a043d4ba92e434cd3c2f372a85bf0127b256631befeb8d4dbdd429a82fc"
	I0813 03:57:46.214351 2052796 cri.go:76] found id: "87bdbbda825fdf2442f99d53b1daa35073722a57329f227fd972e8f5287fb679"
	I0813 03:57:46.214355 2052796 cri.go:76] found id: "2e77f564cd6a8a87291fa7a7780a7ab63b5a887e233148679ea40cce4a1487d9"
	I0813 03:57:46.214358 2052796 cri.go:76] found id: "5c910e4a92e8eb692a4276056f2c8c50a1ca29494bbac6881023d7d52b2e1386"
	I0813 03:57:46.214362 2052796 cri.go:76] found id: "b044867072c8a562fce8e5aabba2b81d9f24488637d60d13061c59439097a8bd"
	I0813 03:57:46.214366 2052796 cri.go:76] found id: "8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd"
	I0813 03:57:46.214370 2052796 cri.go:76] found id: "bfedc8b92e9d94a2143d41d2ad9f3caf5c24da4da0cd7a7a610c554ca5000d69"
	I0813 03:57:46.214373 2052796 cri.go:76] found id: ""
	I0813 03:57:46.214405 2052796 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0813 03:57:46.249625 2052796 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"081803f26118d4f158028edcd1a25da7ba0ef23ee75c924c3bc02b93830b02ac","pid":945,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/081803f26118d4f158028edcd1a25da7ba0ef23ee75c924c3bc02b93830b02ac","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/081803f26118d4f158028edcd1a25da7ba0ef23ee75c924c3bc02b93830b02ac/rootfs","created":"2021-08-13T03:55:49.928007699Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"081803f26118d4f158028edcd1a25da7ba0ef23ee75c924c3bc02b93830b02ac","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-20210813035500-2022292_a6ba59fc04e6e404956742da5d2f35ab"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"26de2a043d4ba92e434cd3c2f372a85bf0127b256631befeb8d4dbdd429a82fc","pid":2018,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26de2a043d4ba92e434cd3
c2f372a85bf0127b256631befeb8d4dbdd429a82fc","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/26de2a043d4ba92e434cd3c2f372a85bf0127b256631befeb8d4dbdd429a82fc/rootfs","created":"2021-08-13T03:57:02.489807862Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"4f742e964a23c86141e4b6ab853c09c7719ceead8cbcc014372fe419141bea42"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e77f564cd6a8a87291fa7a7780a7ab63b5a887e233148679ea40cce4a1487d9","pid":1639,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e77f564cd6a8a87291fa7a7780a7ab63b5a887e233148679ea40cce4a1487d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2e77f564cd6a8a87291fa7a7780a7ab63b5a887e233148679ea40cce4a1487d9/rootfs","created":"2021-08-13T03:56:14.332502022Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sa
ndbox-id":"46a378659540fb860f2db272374e894e396b9a59e4362ce2c2a32ef06150a584"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"46a378659540fb860f2db272374e894e396b9a59e4362ce2c2a32ef06150a584","pid":1572,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a378659540fb860f2db272374e894e396b9a59e4362ce2c2a32ef06150a584","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/46a378659540fb860f2db272374e894e396b9a59e4362ce2c2a32ef06150a584/rootfs","created":"2021-08-13T03:56:14.147373685Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"46a378659540fb860f2db272374e894e396b9a59e4362ce2c2a32ef06150a584","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wvcgv_1f769dd9-aac5-4959-a135-64c9bf26148c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4f742e964a23c86141e4b6ab853c09c7719ceead8cbcc014372fe419141bea42","pid":1926,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f74
2e964a23c86141e4b6ab853c09c7719ceead8cbcc014372fe419141bea42","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f742e964a23c86141e4b6ab853c09c7719ceead8cbcc014372fe419141bea42/rootfs","created":"2021-08-13T03:57:02.344862265Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"4f742e964a23c86141e4b6ab853c09c7719ceead8cbcc014372fe419141bea42","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c4795fe5-fc69-49f2-a14d-841404f48843"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"584edf842ed29d81275517f795258fdce46c4608d37a80daa0622aea955c9e26","pid":2053,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/584edf842ed29d81275517f795258fdce46c4608d37a80daa0622aea955c9e26","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/584edf842ed29d81275517f795258fdce46c4608d37a80daa0622aea955c9e26/rootfs","created":"2021-08-13T03:57:02.540431022Z","annotations":{"io.kubernetes.cri.container-n
ame":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ce09cc60a5088dcac1da7716e2751d0b149a6b8f0dea9f0849e403ce38e0c5c5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5c910e4a92e8eb692a4276056f2c8c50a1ca29494bbac6881023d7d52b2e1386","pid":1168,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c910e4a92e8eb692a4276056f2c8c50a1ca29494bbac6881023d7d52b2e1386","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c910e4a92e8eb692a4276056f2c8c50a1ca29494bbac6881023d7d52b2e1386/rootfs","created":"2021-08-13T03:55:50.292698476Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"f7398811f5c1ab32324c181b9105e0f0caefe8e34034453b4f7b5a146d1d2f36"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"87bdbbda825fdf2442f99d53b1daa35073722a57329f227fd972e8f5287fb679","pid":1680,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.tas
k/k8s.io/87bdbbda825fdf2442f99d53b1daa35073722a57329f227fd972e8f5287fb679","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/87bdbbda825fdf2442f99d53b1daa35073722a57329f227fd972e8f5287fb679/rootfs","created":"2021-08-13T03:56:14.484886395Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d20a1118af86256b6c6a6945620ab95c052af8aa6a824697d0b0a0a9b33732a7"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd","pid":1077,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd/rootfs","created":"2021-08-13T03:55:50.123146071Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"conta
iner","io.kubernetes.cri.sandbox-id":"93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8","pid":963,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8/rootfs","created":"2021-08-13T03:55:49.963037631Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-20210813035500-2022292_07f53f68837b43ea9ca8e46c7fd6a7cb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b044867072c8a562fce8e5aabba2b81d9f24488637d60d13061c59439097a8bd","pid":1158,"status":"running","bundle":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/b044867072c8a562fce8e5aabba2b81d9f24488637d60d13061c59439097a8bd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b044867072c8a562fce8e5aabba2b81d9f24488637d60d13061c59439097a8bd/rootfs","created":"2021-08-13T03:55:50.22635773Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"dc0376b435853ad1378172a515dd55ce7c31e457ed08793b3532339f236b39a3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bfedc8b92e9d94a2143d41d2ad9f3caf5c24da4da0cd7a7a610c554ca5000d69","pid":1093,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfedc8b92e9d94a2143d41d2ad9f3caf5c24da4da0cd7a7a610c554ca5000d69","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfedc8b92e9d94a2143d41d2ad9f3caf5c24da4da0cd7a7a610c554ca5000d69/rootfs","created":"2021-08-13T03:55:50.131754463Z","annotations":{"io.kubernetes.cri.container-name":"etcd"
,"io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"081803f26118d4f158028edcd1a25da7ba0ef23ee75c924c3bc02b93830b02ac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ce09cc60a5088dcac1da7716e2751d0b149a6b8f0dea9f0849e403ce38e0c5c5","pid":1978,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce09cc60a5088dcac1da7716e2751d0b149a6b8f0dea9f0849e403ce38e0c5c5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce09cc60a5088dcac1da7716e2751d0b149a6b8f0dea9f0849e403ce38e0c5c5/rootfs","created":"2021-08-13T03:57:02.416822423Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ce09cc60a5088dcac1da7716e2751d0b149a6b8f0dea9f0849e403ce38e0c5c5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-tv6ld_fdcc7b54-72d1-4a9e-bedd-687a4e6272d1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d20a1118af86256b6c6a6945620ab95c052af8aa6a824697d0b0a0a9b33732a7","pid":1578,"status":
"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d20a1118af86256b6c6a6945620ab95c052af8aa6a824697d0b0a0a9b33732a7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d20a1118af86256b6c6a6945620ab95c052af8aa6a824697d0b0a0a9b33732a7/rootfs","created":"2021-08-13T03:56:14.192125725Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d20a1118af86256b6c6a6945620ab95c052af8aa6a824697d0b0a0a9b33732a7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-bnjk7_f0564b00-d495-418d-8eb3-be7440a56f15"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dc0376b435853ad1378172a515dd55ce7c31e457ed08793b3532339f236b39a3","pid":973,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0376b435853ad1378172a515dd55ce7c31e457ed08793b3532339f236b39a3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc0376b435853ad1378172a515dd55ce7c31e457ed08793b3532339f236b39a3/rootfs","created":"2021-08
-13T03:55:49.971149033Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"dc0376b435853ad1378172a515dd55ce7c31e457ed08793b3532339f236b39a3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-20210813035500-2022292_f1f9ef61d91c833fcaa2c35fa1b63835"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7398811f5c1ab32324c181b9105e0f0caefe8e34034453b4f7b5a146d1d2f36","pid":1003,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7398811f5c1ab32324c181b9105e0f0caefe8e34034453b4f7b5a146d1d2f36","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7398811f5c1ab32324c181b9105e0f0caefe8e34034453b4f7b5a146d1d2f36/rootfs","created":"2021-08-13T03:55:50.005054565Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"f7398811f5c1ab32324c181b9105e0f0caefe8e34034453b4f7b5a146d1d2f36","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_
kube-scheduler-functional-20210813035500-2022292_427d0d0fb475ad17c6148a8340a035fa"},"owner":"root"}]
	I0813 03:57:46.249816 2052796 cri.go:113] list returned 16 containers
	I0813 03:57:46.249822 2052796 cri.go:116] container: {ID:081803f26118d4f158028edcd1a25da7ba0ef23ee75c924c3bc02b93830b02ac Status:running}
	I0813 03:57:46.249831 2052796 cri.go:118] skipping 081803f26118d4f158028edcd1a25da7ba0ef23ee75c924c3bc02b93830b02ac - not in ps
	I0813 03:57:46.249835 2052796 cri.go:116] container: {ID:26de2a043d4ba92e434cd3c2f372a85bf0127b256631befeb8d4dbdd429a82fc Status:running}
	I0813 03:57:46.249840 2052796 cri.go:122] skipping {26de2a043d4ba92e434cd3c2f372a85bf0127b256631befeb8d4dbdd429a82fc running}: state = "running", want "paused"
	I0813 03:57:46.249848 2052796 cri.go:116] container: {ID:2e77f564cd6a8a87291fa7a7780a7ab63b5a887e233148679ea40cce4a1487d9 Status:running}
	I0813 03:57:46.249853 2052796 cri.go:122] skipping {2e77f564cd6a8a87291fa7a7780a7ab63b5a887e233148679ea40cce4a1487d9 running}: state = "running", want "paused"
	I0813 03:57:46.249857 2052796 cri.go:116] container: {ID:46a378659540fb860f2db272374e894e396b9a59e4362ce2c2a32ef06150a584 Status:running}
	I0813 03:57:46.249862 2052796 cri.go:118] skipping 46a378659540fb860f2db272374e894e396b9a59e4362ce2c2a32ef06150a584 - not in ps
	I0813 03:57:46.249865 2052796 cri.go:116] container: {ID:4f742e964a23c86141e4b6ab853c09c7719ceead8cbcc014372fe419141bea42 Status:running}
	I0813 03:57:46.249870 2052796 cri.go:118] skipping 4f742e964a23c86141e4b6ab853c09c7719ceead8cbcc014372fe419141bea42 - not in ps
	I0813 03:57:46.249874 2052796 cri.go:116] container: {ID:584edf842ed29d81275517f795258fdce46c4608d37a80daa0622aea955c9e26 Status:running}
	I0813 03:57:46.249878 2052796 cri.go:122] skipping {584edf842ed29d81275517f795258fdce46c4608d37a80daa0622aea955c9e26 running}: state = "running", want "paused"
	I0813 03:57:46.249885 2052796 cri.go:116] container: {ID:5c910e4a92e8eb692a4276056f2c8c50a1ca29494bbac6881023d7d52b2e1386 Status:running}
	I0813 03:57:46.249890 2052796 cri.go:122] skipping {5c910e4a92e8eb692a4276056f2c8c50a1ca29494bbac6881023d7d52b2e1386 running}: state = "running", want "paused"
	I0813 03:57:46.249894 2052796 cri.go:116] container: {ID:87bdbbda825fdf2442f99d53b1daa35073722a57329f227fd972e8f5287fb679 Status:running}
	I0813 03:57:46.249899 2052796 cri.go:122] skipping {87bdbbda825fdf2442f99d53b1daa35073722a57329f227fd972e8f5287fb679 running}: state = "running", want "paused"
	I0813 03:57:46.249903 2052796 cri.go:116] container: {ID:8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd Status:running}
	I0813 03:57:46.249908 2052796 cri.go:122] skipping {8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd running}: state = "running", want "paused"
	I0813 03:57:46.249915 2052796 cri.go:116] container: {ID:93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8 Status:running}
	I0813 03:57:46.249920 2052796 cri.go:118] skipping 93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8 - not in ps
	I0813 03:57:46.249923 2052796 cri.go:116] container: {ID:b044867072c8a562fce8e5aabba2b81d9f24488637d60d13061c59439097a8bd Status:running}
	I0813 03:57:46.249927 2052796 cri.go:122] skipping {b044867072c8a562fce8e5aabba2b81d9f24488637d60d13061c59439097a8bd running}: state = "running", want "paused"
	I0813 03:57:46.249932 2052796 cri.go:116] container: {ID:bfedc8b92e9d94a2143d41d2ad9f3caf5c24da4da0cd7a7a610c554ca5000d69 Status:running}
	I0813 03:57:46.249936 2052796 cri.go:122] skipping {bfedc8b92e9d94a2143d41d2ad9f3caf5c24da4da0cd7a7a610c554ca5000d69 running}: state = "running", want "paused"
	I0813 03:57:46.249940 2052796 cri.go:116] container: {ID:ce09cc60a5088dcac1da7716e2751d0b149a6b8f0dea9f0849e403ce38e0c5c5 Status:running}
	I0813 03:57:46.249944 2052796 cri.go:118] skipping ce09cc60a5088dcac1da7716e2751d0b149a6b8f0dea9f0849e403ce38e0c5c5 - not in ps
	I0813 03:57:46.249948 2052796 cri.go:116] container: {ID:d20a1118af86256b6c6a6945620ab95c052af8aa6a824697d0b0a0a9b33732a7 Status:running}
	I0813 03:57:46.249952 2052796 cri.go:118] skipping d20a1118af86256b6c6a6945620ab95c052af8aa6a824697d0b0a0a9b33732a7 - not in ps
	I0813 03:57:46.249955 2052796 cri.go:116] container: {ID:dc0376b435853ad1378172a515dd55ce7c31e457ed08793b3532339f236b39a3 Status:running}
	I0813 03:57:46.249960 2052796 cri.go:118] skipping dc0376b435853ad1378172a515dd55ce7c31e457ed08793b3532339f236b39a3 - not in ps
	I0813 03:57:46.249963 2052796 cri.go:116] container: {ID:f7398811f5c1ab32324c181b9105e0f0caefe8e34034453b4f7b5a146d1d2f36 Status:running}
	I0813 03:57:46.249967 2052796 cri.go:118] skipping f7398811f5c1ab32324c181b9105e0f0caefe8e34034453b4f7b5a146d1d2f36 - not in ps
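The cri.go lines above show minikube walking the `runc list -f json` output and keeping only containers whose status matches the requested state (`paused` here), skipping sandboxes not in the `crictl ps` set. A dependency-free sketch of that filter, using a flattened `id status` listing in place of the JSON (the IDs are shortened stand-ins):

```shell
#!/bin/sh
# Keep only containers whose status matches $want, as cri.go does above.
want="paused"
list_containers() {   # stand-in for: runc list -f json | <parse>
  cat <<'EOF'
584edf84 running
26de2a04 running
deadbeef paused
EOF
}
list_containers | while read -r id status; do
  if [ "$status" = "$want" ]; then
    echo "selected $id"
  else
    # Mirrors the log's "skipping {...}: state = ..., want ..." lines.
    echo "skipping {$id $status}: state = \"$status\", want \"$want\"" >&2
  fi
done
# stdout: selected deadbeef
```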
	I0813 03:57:46.250002 2052796 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 03:57:46.256148 2052796 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0813 03:57:46.256155 2052796 kubeadm.go:600] restartCluster start
	I0813 03:57:46.256189 2052796 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0813 03:57:46.261638 2052796 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0813 03:57:46.262424 2052796 kubeconfig.go:93] found "functional-20210813035500-2022292" server: "https://192.168.49.2:8441"
	I0813 03:57:46.264360 2052796 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0813 03:57:46.270338 2052796 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2021-08-13 03:55:31.057080782 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2021-08-13 03:57:45.944200211 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
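The "needs reconfigure" decision above hinges on a plain `diff -u` exit status: the deployed `kubeadm.yaml` is compared against the freshly generated `.new` file, and the control plane is only restarted when they differ. A sketch with temp files standing in for the real paths:

```shell
#!/bin/sh
old=$(mktemp) new=$(mktemp)
# Stand-ins for /var/tmp/minikube/kubeadm.yaml and kubeadm.yaml.new,
# reduced to the one line that differed in the log.
printf 'enable-admission-plugins: "ResourceQuota"\n' > "$old"
printf 'enable-admission-plugins: "NamespaceAutoProvision"\n' > "$new"
if diff -u "$old" "$new" >/dev/null 2>&1; then
  echo "configs match, skipping reconfigure"
else
  # diff exits non-zero on any difference; minikube treats that as a
  # signal to copy .new over the old file and re-run kubeadm phases.
  echo "needs reconfigure: configs differ"
fi
# prints: needs reconfigure: configs differ
```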
	I0813 03:57:46.270351 2052796 kubeadm.go:1032] stopping kube-system containers ...
	I0813 03:57:46.270359 2052796 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0813 03:57:46.270393 2052796 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 03:57:46.294104 2052796 cri.go:76] found id: "584edf842ed29d81275517f795258fdce46c4608d37a80daa0622aea955c9e26"
	I0813 03:57:46.294112 2052796 cri.go:76] found id: "26de2a043d4ba92e434cd3c2f372a85bf0127b256631befeb8d4dbdd429a82fc"
	I0813 03:57:46.294116 2052796 cri.go:76] found id: "87bdbbda825fdf2442f99d53b1daa35073722a57329f227fd972e8f5287fb679"
	I0813 03:57:46.294120 2052796 cri.go:76] found id: "2e77f564cd6a8a87291fa7a7780a7ab63b5a887e233148679ea40cce4a1487d9"
	I0813 03:57:46.294123 2052796 cri.go:76] found id: "5c910e4a92e8eb692a4276056f2c8c50a1ca29494bbac6881023d7d52b2e1386"
	I0813 03:57:46.294127 2052796 cri.go:76] found id: "b044867072c8a562fce8e5aabba2b81d9f24488637d60d13061c59439097a8bd"
	I0813 03:57:46.294131 2052796 cri.go:76] found id: "8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd"
	I0813 03:57:46.294134 2052796 cri.go:76] found id: "bfedc8b92e9d94a2143d41d2ad9f3caf5c24da4da0cd7a7a610c554ca5000d69"
	I0813 03:57:46.294138 2052796 cri.go:76] found id: ""
	I0813 03:57:46.294141 2052796 cri.go:221] Stopping containers: [584edf842ed29d81275517f795258fdce46c4608d37a80daa0622aea955c9e26 26de2a043d4ba92e434cd3c2f372a85bf0127b256631befeb8d4dbdd429a82fc 87bdbbda825fdf2442f99d53b1daa35073722a57329f227fd972e8f5287fb679 2e77f564cd6a8a87291fa7a7780a7ab63b5a887e233148679ea40cce4a1487d9 5c910e4a92e8eb692a4276056f2c8c50a1ca29494bbac6881023d7d52b2e1386 b044867072c8a562fce8e5aabba2b81d9f24488637d60d13061c59439097a8bd 8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd bfedc8b92e9d94a2143d41d2ad9f3caf5c24da4da0cd7a7a610c554ca5000d69]
	I0813 03:57:46.294173 2052796 ssh_runner.go:149] Run: which crictl
	I0813 03:57:46.296799 2052796 ssh_runner.go:149] Run: sudo /usr/bin/crictl stop 584edf842ed29d81275517f795258fdce46c4608d37a80daa0622aea955c9e26 26de2a043d4ba92e434cd3c2f372a85bf0127b256631befeb8d4dbdd429a82fc 87bdbbda825fdf2442f99d53b1daa35073722a57329f227fd972e8f5287fb679 2e77f564cd6a8a87291fa7a7780a7ab63b5a887e233148679ea40cce4a1487d9 5c910e4a92e8eb692a4276056f2c8c50a1ca29494bbac6881023d7d52b2e1386 b044867072c8a562fce8e5aabba2b81d9f24488637d60d13061c59439097a8bd 8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd bfedc8b92e9d94a2143d41d2ad9f3caf5c24da4da0cd7a7a610c554ca5000d69
	I0813 03:57:46.981694 2052796 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0813 03:57:47.076942 2052796 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 03:57:47.094085 2052796 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 13 03:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Aug 13 03:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2071 Aug 13 03:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Aug 13 03:55 /etc/kubernetes/scheduler.conf
	
	I0813 03:57:47.094130 2052796 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0813 03:57:47.103714 2052796 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0813 03:57:47.113109 2052796 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0813 03:57:47.125156 2052796 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 03:57:47.125190 2052796 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0813 03:57:47.146806 2052796 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0813 03:57:47.153016 2052796 kubeadm.go:165] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0813 03:57:47.153049 2052796 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
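The kubeconfig checks above use `grep`'s exit status the same way: if the expected control-plane URL is missing from a conf file, the file is deleted so the later `kubeadm init phase kubeconfig` step regenerates it. A sketch against a temp file (the stale `server:` line is illustrative):

```shell
#!/bin/sh
url="https://control-plane.minikube.internal:8441"
conf=$(mktemp)   # stand-in for /etc/kubernetes/scheduler.conf
printf 'server: https://192.168.49.2:8441\n' > "$conf"   # stale server line
if ! grep -q "$url" "$conf"; then
  # Mirrors the log: "... may not be in <conf> - will remove"
  echo "\"$url\" may not be in $conf - will remove"
  rm -f "$conf"
fi
[ -e "$conf" ] || echo "removed; kubeadm will recreate it"
```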
	I0813 03:57:47.160273 2052796 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 03:57:47.169735 2052796 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0813 03:57:47.169743 2052796 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 03:57:47.254338 2052796 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 03:57:49.410827 2052796 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.156470791s)
	I0813 03:57:49.410842 2052796 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0813 03:57:49.597687 2052796 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 03:57:49.682817 2052796 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0813 03:57:49.753799 2052796 api_server.go:50] waiting for apiserver process to appear ...
	I0813 03:57:49.753843 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:50.265308 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:50.765523 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:51.265600 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:51.765756 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:52.264878 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:52.765065 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:53.265630 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:53.765795 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:54.264864 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:54.765486 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:55.265828 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:55.764855 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:56.265314 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:56.765576 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:57.265846 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:57.765306 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:58.265779 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:58.765511 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:59.265372 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:57:59.277996 2052796 api_server.go:70] duration metric: took 9.524205249s to wait for apiserver process to appear ...
	I0813 03:57:59.278006 2052796 api_server.go:86] waiting for apiserver healthz status ...
	I0813 03:57:59.278014 2052796 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0813 03:58:04.278884 2052796 api_server.go:255] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0813 03:58:04.779501 2052796 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0813 03:58:04.935456 2052796 api_server.go:265] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0813 03:58:04.935487 2052796 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0813 03:58:05.279889 2052796 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0813 03:58:05.287973 2052796 api_server.go:265] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 03:58:05.287986 2052796 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 03:58:05.779554 2052796 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0813 03:58:05.787472 2052796 api_server.go:265] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0813 03:58:05.787485 2052796 api_server.go:101] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0813 03:58:06.279220 2052796 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0813 03:58:06.287488 2052796 api_server.go:265] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0813 03:58:06.300374 2052796 api_server.go:139] control plane version: v1.21.3
	I0813 03:58:06.300383 2052796 api_server.go:129] duration metric: took 7.022373634s to wait for apiserver health ...
	I0813 03:58:06.300390 2052796 cni.go:93] Creating CNI manager for ""
	I0813 03:58:06.300396 2052796 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:58:06.302883 2052796 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0813 03:58:06.302937 2052796 ssh_runner.go:149] Run: stat /opt/cni/bin/portmap
	I0813 03:58:06.306125 2052796 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 03:58:06.306133 2052796 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0813 03:58:06.318392 2052796 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 03:58:06.609753 2052796 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 03:58:06.620915 2052796 system_pods.go:59] 8 kube-system pods found
	I0813 03:58:06.620934 2052796 system_pods.go:61] "coredns-558bd4d5db-tv6ld" [fdcc7b54-72d1-4a9e-bedd-687a4e6272d1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0813 03:58:06.620941 2052796 system_pods.go:61] "etcd-functional-20210813035500-2022292" [6af9543d-640b-4265-a8de-217f7ebcdded] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0813 03:58:06.620948 2052796 system_pods.go:61] "kindnet-bnjk7" [f0564b00-d495-418d-8eb3-be7440a56f15] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0813 03:58:06.620954 2052796 system_pods.go:61] "kube-apiserver-functional-20210813035500-2022292" [31304b93-8bc2-46bb-bf98-0062517d53d1] Pending
	I0813 03:58:06.620959 2052796 system_pods.go:61] "kube-controller-manager-functional-20210813035500-2022292" [e4e7e5f0-d268-4ce3-a314-857746807e62] Running
	I0813 03:58:06.620964 2052796 system_pods.go:61] "kube-proxy-wvcgv" [1f769dd9-aac5-4959-a135-64c9bf26148c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0813 03:58:06.620973 2052796 system_pods.go:61] "kube-scheduler-functional-20210813035500-2022292" [7b028502-8b84-4b81-8135-967fb938e31e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0813 03:58:06.620979 2052796 system_pods.go:61] "storage-provisioner" [c4795fe5-fc69-49f2-a14d-841404f48843] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0813 03:58:06.620983 2052796 system_pods.go:74] duration metric: took 11.223464ms to wait for pod list to return data ...
	I0813 03:58:06.620990 2052796 node_conditions.go:102] verifying NodePressure condition ...
	I0813 03:58:06.624053 2052796 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0813 03:58:06.624064 2052796 node_conditions.go:123] node cpu capacity is 2
	I0813 03:58:06.624073 2052796 node_conditions.go:105] duration metric: took 3.079878ms to run NodePressure ...
	I0813 03:58:06.624086 2052796 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0813 03:58:06.932179 2052796 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0813 03:58:06.938668 2052796 kubeadm.go:746] kubelet initialised
	I0813 03:58:06.938675 2052796 kubeadm.go:747] duration metric: took 6.48553ms waiting for restarted kubelet to initialise ...
	I0813 03:58:06.938681 2052796 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 03:58:06.944064 2052796 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-tv6ld" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:08.958474 2052796 pod_ready.go:102] pod "coredns-558bd4d5db-tv6ld" in "kube-system" namespace has status "Ready":"False"
	I0813 03:58:10.958531 2052796 pod_ready.go:102] pod "coredns-558bd4d5db-tv6ld" in "kube-system" namespace has status "Ready":"False"
	I0813 03:58:13.457965 2052796 pod_ready.go:92] pod "coredns-558bd4d5db-tv6ld" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:13.457972 2052796 pod_ready.go:81] duration metric: took 6.513896216s waiting for pod "coredns-558bd4d5db-tv6ld" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:13.457981 2052796 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:15.465612 2052796 pod_ready.go:102] pod "etcd-functional-20210813035500-2022292" in "kube-system" namespace has status "Ready":"False"
	I0813 03:58:17.466308 2052796 pod_ready.go:92] pod "etcd-functional-20210813035500-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:17.466315 2052796 pod_ready.go:81] duration metric: took 4.008328076s waiting for pod "etcd-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:17.466327 2052796 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:17.469764 2052796 pod_ready.go:92] pod "kube-apiserver-functional-20210813035500-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:17.469769 2052796 pod_ready.go:81] duration metric: took 3.436626ms waiting for pod "kube-apiserver-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:17.469777 2052796 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:17.473300 2052796 pod_ready.go:92] pod "kube-controller-manager-functional-20210813035500-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:17.473305 2052796 pod_ready.go:81] duration metric: took 3.522286ms waiting for pod "kube-controller-manager-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:17.473312 2052796 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-wvcgv" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:17.477101 2052796 pod_ready.go:92] pod "kube-proxy-wvcgv" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:17.477106 2052796 pod_ready.go:81] duration metric: took 3.789261ms waiting for pod "kube-proxy-wvcgv" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:17.477114 2052796 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:17.480356 2052796 pod_ready.go:92] pod "kube-scheduler-functional-20210813035500-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:17.480361 2052796 pod_ready.go:81] duration metric: took 3.241519ms waiting for pod "kube-scheduler-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:17.480369 2052796 pod_ready.go:38] duration metric: took 10.541678554s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 03:58:17.480385 2052796 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 03:58:17.490470 2052796 ops.go:34] apiserver oom_adj: -16
	I0813 03:58:17.490477 2052796 kubeadm.go:604] restartCluster took 31.234317808s
	I0813 03:58:17.490482 2052796 kubeadm.go:392] StartCluster complete in 31.305296961s
	I0813 03:58:17.490494 2052796 settings.go:142] acquiring lock: {Name:mke0b9bf6059169e73bfde24fe8e8162c3ec0654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:58:17.490576 2052796 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 03:58:17.491227 2052796 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk6797826f33680e9cda7cd38a7adfcabda9681c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 03:58:17.495198 2052796 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "functional-20210813035500-2022292" rescaled to 1
	I0813 03:58:17.495251 2052796 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 03:58:17.495252 2052796 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 03:58:17.497305 2052796 out.go:177] * Verifying Kubernetes components...
	I0813 03:58:17.495776 2052796 addons.go:342] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0813 03:58:17.497389 2052796 addons.go:59] Setting storage-provisioner=true in profile "functional-20210813035500-2022292"
	I0813 03:58:17.497400 2052796 addons.go:135] Setting addon storage-provisioner=true in "functional-20210813035500-2022292"
	W0813 03:58:17.497405 2052796 addons.go:147] addon storage-provisioner should already be in state true
	I0813 03:58:17.497424 2052796 host.go:66] Checking if "functional-20210813035500-2022292" exists ...
	I0813 03:58:17.497871 2052796 cli_runner.go:115] Run: docker container inspect functional-20210813035500-2022292 --format={{.State.Status}}
	I0813 03:58:17.497974 2052796 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 03:58:17.498024 2052796 addons.go:59] Setting default-storageclass=true in profile "functional-20210813035500-2022292"
	I0813 03:58:17.498034 2052796 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-20210813035500-2022292"
	I0813 03:58:17.498284 2052796 cli_runner.go:115] Run: docker container inspect functional-20210813035500-2022292 --format={{.State.Status}}
	I0813 03:58:17.577501 2052796 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 03:58:17.577655 2052796 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 03:58:17.577664 2052796 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 03:58:17.577716 2052796 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210813035500-2022292
	I0813 03:58:17.594382 2052796 addons.go:135] Setting addon default-storageclass=true in "functional-20210813035500-2022292"
	W0813 03:58:17.594390 2052796 addons.go:147] addon default-storageclass should already be in state true
	I0813 03:58:17.594414 2052796 host.go:66] Checking if "functional-20210813035500-2022292" exists ...
	I0813 03:58:17.594855 2052796 cli_runner.go:115] Run: docker container inspect functional-20210813035500-2022292 --format={{.State.Status}}
	I0813 03:58:17.634657 2052796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50813 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/functional-20210813035500-2022292/id_rsa Username:docker}
	I0813 03:58:17.663148 2052796 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 03:58:17.663160 2052796 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 03:58:17.663212 2052796 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210813035500-2022292
	I0813 03:58:17.727357 2052796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50813 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/functional-20210813035500-2022292/id_rsa Username:docker}
	I0813 03:58:17.783816 2052796 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 03:58:17.785779 2052796 start.go:716] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0813 03:58:17.785793 2052796 node_ready.go:35] waiting up to 6m0s for node "functional-20210813035500-2022292" to be "Ready" ...
	I0813 03:58:17.788705 2052796 node_ready.go:49] node "functional-20210813035500-2022292" has status "Ready":"True"
	I0813 03:58:17.788711 2052796 node_ready.go:38] duration metric: took 2.906039ms waiting for node "functional-20210813035500-2022292" to be "Ready" ...
	I0813 03:58:17.788719 2052796 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 03:58:17.844273 2052796 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 03:58:17.869680 2052796 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-tv6ld" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:18.146937 2052796 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 03:58:18.146960 2052796 addons.go:344] enableAddons completed in 651.190358ms
	I0813 03:58:18.269434 2052796 pod_ready.go:92] pod "coredns-558bd4d5db-tv6ld" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:18.269441 2052796 pod_ready.go:81] duration metric: took 399.748309ms waiting for pod "coredns-558bd4d5db-tv6ld" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:18.269450 2052796 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:18.665345 2052796 pod_ready.go:92] pod "etcd-functional-20210813035500-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:18.665353 2052796 pod_ready.go:81] duration metric: took 395.895829ms waiting for pod "etcd-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:18.665364 2052796 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:19.065550 2052796 pod_ready.go:92] pod "kube-apiserver-functional-20210813035500-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:19.065557 2052796 pod_ready.go:81] duration metric: took 400.186065ms waiting for pod "kube-apiserver-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:19.065569 2052796 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:19.465364 2052796 pod_ready.go:92] pod "kube-controller-manager-functional-20210813035500-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:19.465371 2052796 pod_ready.go:81] duration metric: took 399.795567ms waiting for pod "kube-controller-manager-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:19.465380 2052796 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wvcgv" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:19.865516 2052796 pod_ready.go:92] pod "kube-proxy-wvcgv" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:19.865523 2052796 pod_ready.go:81] duration metric: took 400.136732ms waiting for pod "kube-proxy-wvcgv" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:19.865533 2052796 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:20.264949 2052796 pod_ready.go:92] pod "kube-scheduler-functional-20210813035500-2022292" in "kube-system" namespace has status "Ready":"True"
	I0813 03:58:20.264956 2052796 pod_ready.go:81] duration metric: took 399.416317ms waiting for pod "kube-scheduler-functional-20210813035500-2022292" in "kube-system" namespace to be "Ready" ...
	I0813 03:58:20.264966 2052796 pod_ready.go:38] duration metric: took 2.476236844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 03:58:20.264981 2052796 api_server.go:50] waiting for apiserver process to appear ...
	I0813 03:58:20.265028 2052796 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 03:58:20.277791 2052796 api_server.go:70] duration metric: took 2.782518837s to wait for apiserver process to appear ...
	I0813 03:58:20.277800 2052796 api_server.go:86] waiting for apiserver healthz status ...
	I0813 03:58:20.277808 2052796 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0813 03:58:20.285922 2052796 api_server.go:265] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0813 03:58:20.286667 2052796 api_server.go:139] control plane version: v1.21.3
	I0813 03:58:20.286676 2052796 api_server.go:129] duration metric: took 8.871749ms to wait for apiserver health ...
	I0813 03:58:20.286682 2052796 system_pods.go:43] waiting for kube-system pods to appear ...
	I0813 03:58:20.468354 2052796 system_pods.go:59] 8 kube-system pods found
	I0813 03:58:20.468366 2052796 system_pods.go:61] "coredns-558bd4d5db-tv6ld" [fdcc7b54-72d1-4a9e-bedd-687a4e6272d1] Running
	I0813 03:58:20.468370 2052796 system_pods.go:61] "etcd-functional-20210813035500-2022292" [6af9543d-640b-4265-a8de-217f7ebcdded] Running
	I0813 03:58:20.468374 2052796 system_pods.go:61] "kindnet-bnjk7" [f0564b00-d495-418d-8eb3-be7440a56f15] Running
	I0813 03:58:20.468379 2052796 system_pods.go:61] "kube-apiserver-functional-20210813035500-2022292" [31304b93-8bc2-46bb-bf98-0062517d53d1] Running
	I0813 03:58:20.468383 2052796 system_pods.go:61] "kube-controller-manager-functional-20210813035500-2022292" [e4e7e5f0-d268-4ce3-a314-857746807e62] Running
	I0813 03:58:20.468386 2052796 system_pods.go:61] "kube-proxy-wvcgv" [1f769dd9-aac5-4959-a135-64c9bf26148c] Running
	I0813 03:58:20.468391 2052796 system_pods.go:61] "kube-scheduler-functional-20210813035500-2022292" [7b028502-8b84-4b81-8135-967fb938e31e] Running
	I0813 03:58:20.468395 2052796 system_pods.go:61] "storage-provisioner" [c4795fe5-fc69-49f2-a14d-841404f48843] Running
	I0813 03:58:20.468398 2052796 system_pods.go:74] duration metric: took 181.71266ms to wait for pod list to return data ...
	I0813 03:58:20.468405 2052796 default_sa.go:34] waiting for default service account to be created ...
	I0813 03:58:20.665709 2052796 default_sa.go:45] found service account: "default"
	I0813 03:58:20.665718 2052796 default_sa.go:55] duration metric: took 197.30782ms for default service account to be created ...
	I0813 03:58:20.665725 2052796 system_pods.go:116] waiting for k8s-apps to be running ...
	I0813 03:58:20.868870 2052796 system_pods.go:86] 8 kube-system pods found
	I0813 03:58:20.868882 2052796 system_pods.go:89] "coredns-558bd4d5db-tv6ld" [fdcc7b54-72d1-4a9e-bedd-687a4e6272d1] Running
	I0813 03:58:20.868888 2052796 system_pods.go:89] "etcd-functional-20210813035500-2022292" [6af9543d-640b-4265-a8de-217f7ebcdded] Running
	I0813 03:58:20.868892 2052796 system_pods.go:89] "kindnet-bnjk7" [f0564b00-d495-418d-8eb3-be7440a56f15] Running
	I0813 03:58:20.868896 2052796 system_pods.go:89] "kube-apiserver-functional-20210813035500-2022292" [31304b93-8bc2-46bb-bf98-0062517d53d1] Running
	I0813 03:58:20.868901 2052796 system_pods.go:89] "kube-controller-manager-functional-20210813035500-2022292" [e4e7e5f0-d268-4ce3-a314-857746807e62] Running
	I0813 03:58:20.868906 2052796 system_pods.go:89] "kube-proxy-wvcgv" [1f769dd9-aac5-4959-a135-64c9bf26148c] Running
	I0813 03:58:20.868912 2052796 system_pods.go:89] "kube-scheduler-functional-20210813035500-2022292" [7b028502-8b84-4b81-8135-967fb938e31e] Running
	I0813 03:58:20.868916 2052796 system_pods.go:89] "storage-provisioner" [c4795fe5-fc69-49f2-a14d-841404f48843] Running
	I0813 03:58:20.868921 2052796 system_pods.go:126] duration metric: took 203.192874ms to wait for k8s-apps to be running ...
	I0813 03:58:20.868928 2052796 system_svc.go:44] waiting for kubelet service to be running ....
	I0813 03:58:20.868974 2052796 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 03:58:20.878154 2052796 system_svc.go:56] duration metric: took 9.222859ms WaitForService to wait for kubelet.
	I0813 03:58:20.878165 2052796 kubeadm.go:547] duration metric: took 3.382894885s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0813 03:58:20.878185 2052796 node_conditions.go:102] verifying NodePressure condition ...
	I0813 03:58:21.065057 2052796 node_conditions.go:122] node storage ephemeral capacity is 40474572Ki
	I0813 03:58:21.065067 2052796 node_conditions.go:123] node cpu capacity is 2
	I0813 03:58:21.065077 2052796 node_conditions.go:105] duration metric: took 186.887972ms to run NodePressure ...
	I0813 03:58:21.065086 2052796 start.go:231] waiting for startup goroutines ...
	I0813 03:58:21.116615 2052796 start.go:462] kubectl: 1.21.3, cluster: 1.21.3 (minor skew: 0)
	I0813 03:58:21.118886 2052796 out.go:177] * Done! kubectl is now configured to use "functional-20210813035500-2022292" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	c6e8f3de56f2b       72565bf5bbedf       3 minutes ago       Running             echoserver-arm            0                   5bf94361ba798
	548ea0f62d552       4ea38350a1beb       3 minutes ago       Running             kube-proxy                2                   46a378659540f
	c5317a7a8c257       1a1f05a2cd7c2       3 minutes ago       Running             coredns                   2                   ce09cc60a5088
	7b6115bf914d6       f37b7c809e5dc       3 minutes ago       Running             kindnet-cni               2                   d20a1118af862
	91a52594035a8       ba04bb24b9575       3 minutes ago       Running             storage-provisioner       2                   4f742e964a23c
	8887c9fc7482c       44a6d50ef170d       3 minutes ago       Running             kube-apiserver            0                   d38b7c0d5c094
	3beb345605406       cb310ff289d79       3 minutes ago       Running             kube-controller-manager   1                   dc0376b435853
	81acc694a9dea       05b738aa1bc63       3 minutes ago       Running             etcd                      1                   081803f26118d
	3a1a0edf6a5b1       ba04bb24b9575       4 minutes ago       Exited              storage-provisioner       1                   4f742e964a23c
	e48f354155aba       1a1f05a2cd7c2       4 minutes ago       Exited              coredns                   1                   ce09cc60a5088
	d2eac7847201a       f37b7c809e5dc       4 minutes ago       Exited              kindnet-cni               1                   d20a1118af862
	a88ce6123b79a       4ea38350a1beb       4 minutes ago       Exited              kube-proxy                1                   46a378659540f
	e104a7cbdd9d6       31a3b96cefc1e       4 minutes ago       Running             kube-scheduler            1                   f7398811f5c1a
	5c910e4a92e8e       31a3b96cefc1e       5 minutes ago       Exited              kube-scheduler            0                   f7398811f5c1a
	b044867072c8a       cb310ff289d79       5 minutes ago       Exited              kube-controller-manager   0                   dc0376b435853
	bfedc8b92e9d9       05b738aa1bc63       5 minutes ago       Exited              etcd                      0                   081803f26118d
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 03:55:02 UTC, end at Fri 2021-08-13 04:01:48 UTC. --
	Aug 13 03:58:47 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:47.125447930Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 03:58:47 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:47.177674031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:sp-pod,Uid:9ba653dc-6c2f-4098-a0c5-51f489d17c03,Namespace:default,Attempt:0,} returns sandbox id \"7ee857c340d9f1fe8115e6b0a4aea30d382e28e7ba2815b9c340bed01e74a2e1\""
	Aug 13 03:58:47 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:47.179096611Z" level=info msg="PullImage \"nginx:latest\""
	Aug 13 03:58:48 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:48.069539242Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 03:58:58 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:58.098401370Z" level=info msg="RemoveContainer for \"8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd\""
	Aug 13 03:58:58 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:58.103682438Z" level=info msg="RemoveContainer for \"8e5670fba2a2eb79521f72949fc599980deac4d472646f753211578ca6010bcd\" returns successfully"
	Aug 13 03:58:58 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:58.104752948Z" level=info msg="StopPodSandbox for \"93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8\""
	Aug 13 03:58:58 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:58.104840808Z" level=info msg="TearDown network for sandbox \"93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8\" successfully"
	Aug 13 03:58:58 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:58.104852385Z" level=info msg="StopPodSandbox for \"93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8\" returns successfully"
	Aug 13 03:58:58 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:58.105072042Z" level=info msg="RemovePodSandbox for \"93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8\""
	Aug 13 03:58:58 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:58:58.110376462Z" level=info msg="RemovePodSandbox \"93e8784b8c01bd693a0a843aae4677a92acd1be6930abbf08a4999917477aca8\" returns successfully"
	Aug 13 03:59:01 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:59:01.045390677Z" level=info msg="PullImage \"nginx:latest\""
	Aug 13 03:59:01 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:59:01.930369561Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 03:59:12 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:59:12.044873967Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 13 03:59:12 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:59:12.925552767Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 03:59:24 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:59:24.045460707Z" level=info msg="PullImage \"nginx:latest\""
	Aug 13 03:59:24 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T03:59:24.960117524Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 04:00:02 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T04:00:02.044817541Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 13 04:00:03 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T04:00:03.100248231Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 04:00:06 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T04:00:06.044657378Z" level=info msg="PullImage \"nginx:latest\""
	Aug 13 04:00:07 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T04:00:07.028347858Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 04:01:34 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T04:01:34.044954460Z" level=info msg="PullImage \"nginx:alpine\""
	Aug 13 04:01:35 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T04:01:35.101152641Z" level=error msg="PullImage \"nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Aug 13 04:01:37 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T04:01:37.044960394Z" level=info msg="PullImage \"nginx:latest\""
	Aug 13 04:01:37 functional-20210813035500-2022292 containerd[3155]: time="2021-08-13T04:01:37.958862604Z" level=error msg="PullImage \"nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	
	* 
	* ==> coredns [c5317a7a8c257df6f926650247f57e4277e50df9039abd53298c6190e2c3c020] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
	CoreDNS-1.8.0
	linux/arm64, go1.15.3, 054c9ae
	
	* 
	* ==> coredns [e48f354155aba1ae9b78d71a51f0808c24ef8db1ad95329d264e5dc526abe59c] <==
	* 
	* 
	* ==> describe nodes <==
	* Name:               functional-20210813035500-2022292
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-20210813035500-2022292
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=functional-20210813035500-2022292
	                    minikube.k8s.io/updated_at=2021_08_13T03_55_59_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 03:55:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20210813035500-2022292
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Aug 2021 04:01:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 03:59:05 +0000   Fri, 13 Aug 2021 03:55:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 03:59:05 +0000   Fri, 13 Aug 2021 03:55:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 03:59:05 +0000   Fri, 13 Aug 2021 03:55:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Aug 2021 03:59:05 +0000   Fri, 13 Aug 2021 03:56:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20210813035500-2022292
	Capacity:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  40474572Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8033460Ki
	  pods:               110
	System Info:
	  Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	  System UUID:                bf0e373e-c80f-4cb8-9080-688507ed7966
	  Boot ID:                    0b91f2d0-31de-4b03-9973-67e3d0024ffb
	  Kernel Version:             5.8.0-1041-aws
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.4.6
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d98884d59-8txch                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	  default                     nginx-svc                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  default                     sp-pod                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-558bd4d5db-tv6ld                                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m35s
	  kube-system                 etcd-functional-20210813035500-2022292                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m41s
	  kube-system                 kindnet-bnjk7                                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m35s
	  kube-system                 kube-apiserver-functional-20210813035500-2022292             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-controller-manager-functional-20210813035500-2022292    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-proxy-wvcgv                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-scheduler-functional-20210813035500-2022292             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s (x5 over 5m59s)  kubelet     Node functional-20210813035500-2022292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x5 over 5m59s)  kubelet     Node functional-20210813035500-2022292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x4 over 5m59s)  kubelet     Node functional-20210813035500-2022292 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m41s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m41s                  kubelet     Node functional-20210813035500-2022292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s                  kubelet     Node functional-20210813035500-2022292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s                  kubelet     Node functional-20210813035500-2022292 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m34s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                4m51s                  kubelet     Node functional-20210813035500-2022292 status is now: NodeReady
	  Normal  Starting                 3m50s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet     Node functional-20210813035500-2022292 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet     Node functional-20210813035500-2022292 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet     Node functional-20210813035500-2022292 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 3m41s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Aug13 02:55] systemd-journald[174]: Failed to send stream file descriptor to service manager: Connection refused
	
	* 
	* ==> etcd [81acc694a9deaae45991ddea6888f1da12d618aaa69d5fa7e516d773644c4154] <==
	* 2021-08-13 03:58:00.297877 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-13 03:58:00.298056 I | embed: ready to serve client requests
	2021-08-13 03:58:00.299399 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 03:58:17.206822 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:58:21.657102 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:58:31.654706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:58:41.652388 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:58:51.651963 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:59:01.651773 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:59:11.652525 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:59:21.652853 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:59:31.652016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:59:41.652419 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:59:51.652643 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:00:01.652294 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:00:11.652309 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:00:21.652499 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:00:31.652721 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:00:41.651665 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:00:51.651891 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:01:01.652612 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:01:11.652380 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:01:21.651840 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:01:31.652713 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 04:01:41.652285 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [bfedc8b92e9d94a2143d41d2ad9f3caf5c24da4da0cd7a7a610c554ca5000d69] <==
	* 2021-08-13 03:55:50.222858 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 03:55:50.222917 I | embed: listening for peers on 192.168.49.2:2380
	raft2021/08/13 03:55:50 INFO: aec36adc501070cc is starting a new election at term 1
	raft2021/08/13 03:55:50 INFO: aec36adc501070cc became candidate at term 2
	raft2021/08/13 03:55:50 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2021/08/13 03:55:50 INFO: aec36adc501070cc became leader at term 2
	raft2021/08/13 03:55:50 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2021-08-13 03:55:50.900507 I | etcdserver: published {Name:functional-20210813035500-2022292 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2021-08-13 03:55:50.900634 I | embed: ready to serve client requests
	2021-08-13 03:55:50.900825 I | embed: ready to serve client requests
	2021-08-13 03:55:50.902141 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 03:55:50.916429 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-13 03:55:50.921387 I | embed: serving client requests on 192.168.49.2:2379
	2021-08-13 03:55:50.924384 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-13 03:55:50.960402 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-13 03:56:16.463187 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:56:25.805450 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:56:35.805907 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:56:45.805817 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:56:55.805377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:57:05.805263 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:57:15.806337 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:57:25.805486 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:57:35.806133 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-13 03:57:45.806679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  04:01:48 up 13:44,  0 users,  load average: 0.17, 0.56, 0.86
	Linux functional-20210813035500-2022292 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [8887c9fc7482ca59923c170fe1c611885de310dc147783a5f86186434eab7bdd] <==
	* I0813 03:58:06.922537       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0813 03:58:18.602304       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0813 03:58:18.826757       1 controller.go:611] quota admission added evaluator for: endpoints
	I0813 03:58:29.557788       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0813 03:58:32.466905       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:58:32.466951       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:58:32.466964       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:59:03.720872       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:59:03.720909       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:59:03.721003       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 03:59:38.717755       1 client.go:360] parsed scheme: "passthrough"
	I0813 03:59:38.717794       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 03:59:38.717803       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 04:00:14.194373       1 client.go:360] parsed scheme: "passthrough"
	I0813 04:00:14.194413       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 04:00:14.194421       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 04:00:44.318037       1 client.go:360] parsed scheme: "passthrough"
	I0813 04:00:44.318075       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 04:00:44.318083       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 04:01:14.717016       1 client.go:360] parsed scheme: "passthrough"
	I0813 04:01:14.717057       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 04:01:14.717065       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0813 04:01:45.614430       1 client.go:360] parsed scheme: "passthrough"
	I0813 04:01:45.614478       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0813 04:01:45.614486       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-controller-manager [3beb345605406a5ec1bbc993c6e35d28debcb6f307183664014db9ffcadc43cf] <==
	* I0813 03:58:18.559939       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0813 03:58:18.559948       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0813 03:58:18.559958       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0813 03:58:18.559981       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0813 03:58:18.574170       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0813 03:58:18.574266       1 shared_informer.go:247] Caches are synced for stateful set 
	I0813 03:58:18.574304       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0813 03:58:18.575375       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0813 03:58:18.579338       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0813 03:58:18.591571       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0813 03:58:18.601614       1 shared_informer.go:247] Caches are synced for namespace 
	I0813 03:58:18.608392       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 03:58:18.674338       1 shared_informer.go:247] Caches are synced for attach detach 
	I0813 03:58:18.723304       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0813 03:58:18.723540       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0813 03:58:18.778179       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 03:58:18.814453       1 shared_informer.go:247] Caches are synced for endpoint 
	I0813 03:58:18.847402       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 03:58:18.856644       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0813 03:58:19.320268       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 03:58:19.324482       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 03:58:19.324501       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 03:58:29.564157       1 event.go:291] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-6d98884d59 to 1"
	I0813 03:58:29.624689       1 event.go:291] "Event occurred" object="default/hello-node-6d98884d59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-6d98884d59-8txch"
	I0813 03:58:46.408921       1 event.go:291] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	
	* 
	* ==> kube-controller-manager [b044867072c8a562fce8e5aabba2b81d9f24488637d60d13061c59439097a8bd] <==
	* I0813 03:56:12.521996       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-functional-20210813035500-2022292" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 03:56:12.523702       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-functional-20210813035500-2022292" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0813 03:56:12.524434       1 shared_informer.go:247] Caches are synced for stateful set 
	I0813 03:56:12.528762       1 shared_informer.go:247] Caches are synced for namespace 
	I0813 03:56:12.533580       1 shared_informer.go:247] Caches are synced for deployment 
	I0813 03:56:12.556624       1 shared_informer.go:247] Caches are synced for attach detach 
	I0813 03:56:12.605917       1 shared_informer.go:247] Caches are synced for disruption 
	I0813 03:56:12.606013       1 disruption.go:371] Sending events to api server.
	I0813 03:56:12.700839       1 shared_informer.go:247] Caches are synced for job 
	I0813 03:56:12.709153       1 shared_informer.go:247] Caches are synced for TTL after finished 
	I0813 03:56:12.720440       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 03:56:12.723728       1 shared_informer.go:247] Caches are synced for resource quota 
	I0813 03:56:12.756361       1 shared_informer.go:247] Caches are synced for cronjob 
	I0813 03:56:13.070109       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bnjk7"
	I0813 03:56:13.074400       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wvcgv"
	E0813 03:56:13.101437       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"08c7e242-97ab-4105-9ebc-8c4bc55a668c", ResourceVersion:"280", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63764423759, loc:(*time.Location)(0x6704c20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"kindest/kindnetd:v20210326-1e038dc5\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.mk\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40019bb5a8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40019bb5c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001a0f120), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019bb5d8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019bb5f0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019bb608), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:v20210326-1e038dc5", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a0f140)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001a0f180)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001b31a40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001b8c558), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000645a40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001b7e5e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001b8c5a0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0813 03:56:13.165626       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 03:56:13.204913       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0813 03:56:13.204933       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0813 03:56:13.415063       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-558bd4d5db to 2"
	I0813 03:56:13.435241       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-558bd4d5db to 1"
	I0813 03:56:13.515708       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-v4cvh"
	I0813 03:56:13.524693       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-558bd4d5db-tv6ld"
	I0813 03:56:13.547707       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-558bd4d5db-v4cvh"
	I0813 03:56:57.511119       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	
	* 
	* ==> kube-proxy [548ea0f62d55267a04b0aa4eba6aa0e576b13d3da4fef204b2af23b48cabf3ff] <==
	* I0813 03:58:07.399547       1 node.go:172] Successfully retrieved node IP: 192.168.49.2
	I0813 03:58:07.399596       1 server_others.go:140] Detected node IP 192.168.49.2
	W0813 03:58:07.399778       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	I0813 03:58:07.425067       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
	I0813 03:58:07.425099       1 server_others.go:212] Using iptables Proxier.
	I0813 03:58:07.425109       1 server_others.go:219] creating dualStackProxier for iptables.
	W0813 03:58:07.425119       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
	I0813 03:58:07.425457       1 server.go:643] Version: v1.21.3
	I0813 03:58:07.427625       1 config.go:315] Starting service config controller
	I0813 03:58:07.427650       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0813 03:58:07.427991       1 config.go:224] Starting endpoint slice config controller
	I0813 03:58:07.428006       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0813 03:58:07.435544       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0813 03:58:07.436631       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0813 03:58:07.528937       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0813 03:58:07.528945       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [a88ce6123b79a4b15622b22e5632476e410aa53ceb5b001157451f5cbf094f1a] <==
	* 
	* 
	* ==> kube-scheduler [5c910e4a92e8eb692a4276056f2c8c50a1ca29494bbac6881023d7d52b2e1386] <==
	* W0813 03:55:56.097273       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0813 03:55:56.097300       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0813 03:55:56.191600       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0813 03:55:56.195846       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0813 03:55:56.196111       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0813 03:55:56.196190       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0813 03:55:56.228555       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0813 03:55:56.228900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 03:55:56.229026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 03:55:56.229103       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 03:55:56.229161       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 03:55:56.229193       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 03:55:56.229268       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 03:55:56.229331       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0813 03:55:56.229393       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 03:55:56.229444       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 03:55:56.229503       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 03:55:56.229562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 03:55:56.229636       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 03:55:56.237203       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 03:55:57.072537       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 03:55:57.077376       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0813 03:55:57.197215       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 03:55:57.236162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0813 03:55:57.597190       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [e104a7cbdd9d645d0092bd7fbcca35c0baafee253852d4aeb4f17deace17f129] <==
	* E0813 03:57:52.312681       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:52.341189       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:52.603420       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:52.643824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:52.876699       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:55.475411       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:55.666353       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1beta1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:55.691849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:56.241525       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:56.332142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:56.543219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:56.747992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:56.823535       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:56.918344       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:57.209501       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:57.360051       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:57.528147       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:58.242831       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:57:58.316379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0813 03:58:04.940786       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0813 03:58:04.940992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 03:58:04.941227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 03:58:04.941296       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 03:58:04.941438       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0813 03:58:05.686968       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 03:55:02 UTC, end at Fri 2021-08-13 04:01:48 UTC. --
	Aug 13 04:00:03 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:03.100969    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID=d7118872-ef7f-407a-8690-58e493a5fda0
	Aug 13 04:00:07 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:07.028554    3887 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Aug 13 04:00:07 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:07.028599    3887 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Aug 13 04:00:07 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:07.028673    3887 kuberuntime_manager.go:864] container &Container{Name:myfrontend,Image:nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vvqcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod sp-pod_default(9ba653dc-6c2f-4098-a0c5-51f489d17c03): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 13 04:00:07 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:07.028722    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID=9ba653dc-6c2f-4098-a0c5-51f489d17c03
	Aug 13 04:00:14 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:14.044304    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=d7118872-ef7f-407a-8690-58e493a5fda0
	Aug 13 04:00:20 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:20.047527    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=9ba653dc-6c2f-4098-a0c5-51f489d17c03
	Aug 13 04:00:25 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:25.044630    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=d7118872-ef7f-407a-8690-58e493a5fda0
	Aug 13 04:00:31 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:31.044058    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=9ba653dc-6c2f-4098-a0c5-51f489d17c03
	Aug 13 04:00:40 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:40.043875    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=d7118872-ef7f-407a-8690-58e493a5fda0
	Aug 13 04:00:44 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:44.044536    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=9ba653dc-6c2f-4098-a0c5-51f489d17c03
	Aug 13 04:00:53 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:53.044181    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=d7118872-ef7f-407a-8690-58e493a5fda0
	Aug 13 04:00:57 functional-20210813035500-2022292 kubelet[3887]: E0813 04:00:57.045048    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=9ba653dc-6c2f-4098-a0c5-51f489d17c03
	Aug 13 04:01:07 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:07.044347    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=d7118872-ef7f-407a-8690-58e493a5fda0
	Aug 13 04:01:11 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:11.043943    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=9ba653dc-6c2f-4098-a0c5-51f489d17c03
	Aug 13 04:01:21 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:21.044728    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx:alpine\\\"\"" pod="default/nginx-svc" podUID=d7118872-ef7f-407a-8690-58e493a5fda0
	Aug 13 04:01:26 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:26.045033    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"nginx\\\"\"" pod="default/sp-pod" podUID=9ba653dc-6c2f-4098-a0c5-51f489d17c03
	Aug 13 04:01:35 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:35.101353    3887 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:alpine"
	Aug 13 04:01:35 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:35.101396    3887 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:alpine"
	Aug 13 04:01:35 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:35.101470    3887 kuberuntime_manager.go:864] container &Container{Name:nginx,Image:nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pb567,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod nginx-svc_default(d7118872-ef7f-407a-8690-58e493a5fda0): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 13 04:01:35 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:35.101532    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID=d7118872-ef7f-407a-8690-58e493a5fda0
	Aug 13 04:01:37 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:37.959099    3887 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Aug 13 04:01:37 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:37.959160    3887 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="nginx:latest"
	Aug 13 04:01:37 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:37.959250    3887 kuberuntime_manager.go:864] container &Container{Name:myfrontend,Image:nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vvqcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod sp-pod_default(9ba653dc-6c2f-4098-a0c5-51f489d17c03): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Aug 13 04:01:37 functional-20210813035500-2022292 kubelet[3887]: E0813 04:01:37.959313    3887 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID=9ba653dc-6c2f-4098-a0c5-51f489d17c03
	
	* 
	* ==> storage-provisioner [3a1a0edf6a5b1d4bb86c0b3b4991784cf7c58ba670890eca2a90ea35ca2dcdf3] <==
	* 
	* 
	* ==> storage-provisioner [91a52594035a8d6b57aac421124a4d78971fd074cf9311366b9cba645fcd498a] <==
	* I0813 03:58:06.775878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 03:58:06.795085       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 03:58:06.795132       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 03:58:24.276123       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 03:58:24.284437       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ca9d7ff-246b-4db4-ad5e-b640cb0a9f78", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210813035500-2022292_ea53ef6c-0fbb-4c32-8752-e2e8e8058202 became leader
	I0813 03:58:24.294757       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210813035500-2022292_ea53ef6c-0fbb-4c32-8752-e2e8e8058202!
	I0813 03:58:24.397453       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20210813035500-2022292_ea53ef6c-0fbb-4c32-8752-e2e8e8058202!
	I0813 03:58:46.408120       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0813 03:58:46.408285       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d77d3bae-9918-46eb-a509-d1b2392ff1a4 489 0 2021-08-13 03:56:14 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2021-08-13 03:56:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-3f6e92df-39cd-4249-94fe-7ea725097595 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  3f6e92df-39cd-4249-94fe-7ea725097595 728 0 2021-08-13 03:58:46 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2021-08-13 03:58:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2021-08-13 03:58:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0813 03:58:46.415126       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-3f6e92df-39cd-4249-94fe-7ea725097595" provisioned
	I0813 03:58:46.415228       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0813 03:58:46.419415       1 volume_store.go:212] Trying to save persistentvolume "pvc-3f6e92df-39cd-4249-94fe-7ea725097595"
	I0813 03:58:46.414841       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3f6e92df-39cd-4249-94fe-7ea725097595", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0813 03:58:46.428226       1 volume_store.go:219] persistentvolume "pvc-3f6e92df-39cd-4249-94fe-7ea725097595" saved
	I0813 03:58:46.428562       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3f6e92df-39cd-4249-94fe-7ea725097595", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3f6e92df-39cd-4249-94fe-7ea725097595
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210813035500-2022292 -n functional-20210813035500-2022292
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20210813035500-2022292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: nginx-svc sp-pod
helpers_test.go:273: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20210813035500-2022292 describe pod nginx-svc sp-pod
helpers_test.go:281: (dbg) kubectl --context functional-20210813035500-2022292 describe pod nginx-svc sp-pod:

-- stdout --
	Name:         nginx-svc
	Namespace:    default
	Priority:     0
	Node:         functional-20210813035500-2022292/192.168.49.2
	Start Time:   Fri, 13 Aug 2021 03:58:31 +0000
	Labels:       run=nginx-svc
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pb567 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-pb567:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m18s                 default-scheduler  Successfully assigned default/nginx-svc to functional-20210813035500-2022292
	  Warning  Failed     3m2s (x2 over 3m16s)  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    107s (x4 over 3m17s)  kubelet            Pulling image "nginx:alpine"
	  Warning  Failed     106s (x4 over 3m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     106s (x2 over 2m37s)  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     95s (x6 over 3m15s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    84s (x7 over 3m15s)   kubelet            Back-off pulling image "nginx:alpine"
	
	
	Name:         sp-pod
	Namespace:    default
	Priority:     0
	Node:         functional-20210813035500-2022292/192.168.49.2
	Start Time:   Fri, 13 Aug 2021 03:58:46 +0000
	Labels:       test=storage-provisioner
	Annotations:  <none>
	Status:       Pending
	IP:           10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vvqcd (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vvqcd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-20210813035500-2022292
	  Normal   Pulling    103s (x4 over 3m2s)  kubelet            Pulling image "nginx"
	  Warning  Failed     102s (x4 over 3m1s)  kubelet            Failed to pull image "nginx": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8f335768880da6baf72b70c701002b45f4932acae8d574dedfddaf967fc3ac90: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     102s (x4 over 3m1s)  kubelet            Error: ErrImagePull
	  Warning  Failed     78s (x6 over 3m1s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    65s (x7 over 3m1s)   kubelet            Back-off pulling image "nginx"

-- /stdout --
helpers_test.go:284: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:285: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (188.20s)
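Editor's note: every failure in this excerpt shares one root cause — the kubelet journal above shows Docker Hub answering anonymous pulls of `nginx` with 429 Too Many Requests (`toomanyrequests`). When scanning long journals like this, the per-pod failure counts can be tallied mechanically. The sketch below is illustrative Python; the sample lines and regex are modeled on (not copied from) the `pod_workers.go:190` entries above:

```python
import re
from collections import Counter

# Illustrative excerpts in the shape of the pod_workers.go:190 lines above
# (quoting simplified; real journal lines carry nested escaped quotes).
LINES = [
    'E0813 04:00:20.047527 pod_workers.go:190] err="failed to StartContainer for myfrontend '
    'with ImagePullBackOff: Back-off pulling image nginx" pod="default/sp-pod"',
    'E0813 04:00:25.044630 pod_workers.go:190] err="failed to StartContainer for nginx '
    'with ImagePullBackOff: Back-off pulling image nginx:alpine" pod="default/nginx-svc"',
    'E0813 04:01:35.101532 pod_workers.go:190] err="failed to StartContainer for nginx '
    'with ErrImagePull: 429 Too Many Requests" pod="default/nginx-svc"',
]

# Capture the error kind and the owning pod; non-greedy so we stop at
# the first pod="..." after the error kind.
PATTERN = re.compile(r'with (ErrImagePull|ImagePullBackOff)\b.*?pod="([^"]+)"')

def tally_pull_errors(lines):
    """Count (pod, error kind) pairs across kubelet error lines."""
    counts = Counter()
    for line in lines:
        match = PATTERN.search(line)
        if match:
            kind, pod = match.groups()
            counts[(pod, kind)] += 1
    return counts

print(tally_pull_errors(LINES))
```

Against a real journal, feed it the output of `journalctl -u kubelet` (or `minikube logs`) line by line; the tallies quickly separate a one-off pull hiccup from the sustained rate-limiting seen here.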

TestFunctional/parallel/TunnelCmd/serial/WaitService (241.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:146: (dbg) Run:  kubectl --context functional-20210813035500-2022292 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:150: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [d7118872-ef7f-407a-8690-58e493a5fda0] Pending
helpers_test.go:343: "nginx-svc" [d7118872-ef7f-407a-8690-58e493a5fda0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:150: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService: pod "run=nginx-svc" failed to start within 4m0s: timed out waiting for the condition ****
functional_test_tunnel_test.go:150: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210813035500-2022292 -n functional-20210813035500-2022292
functional_test_tunnel_test.go:150: TestFunctional/parallel/TunnelCmd/serial/WaitService: showing logs for failed pods as of 2021-08-13 04:02:31.972462923 +0000 UTC m=+2049.984522108
functional_test_tunnel_test.go:150: (dbg) Run:  kubectl --context functional-20210813035500-2022292 describe po nginx-svc -n default
functional_test_tunnel_test.go:150: (dbg) kubectl --context functional-20210813035500-2022292 describe po nginx-svc -n default:
Name:         nginx-svc
Namespace:    default
Priority:     0
Node:         functional-20210813035500-2022292/192.168.49.2
Start Time:   Fri, 13 Aug 2021 03:58:31 +0000
Labels:       run=nginx-svc
Annotations:  <none>
Status:       Pending
IP:           10.244.0.4
IPs:
IP:  10.244.0.4
Containers:
nginx:
Container ID:   
Image:          nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pb567 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-pb567:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  4m1s                   default-scheduler  Successfully assigned default/nginx-svc to functional-20210813035500-2022292
Warning  Failed     3m45s (x2 over 3m59s)  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:93be99beb7ac44e27734270778f5a32b7484d1acadbac0a1a33ab100c8b6d5be: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    2m30s (x4 over 4m)     kubelet            Pulling image "nginx:alpine"
Warning  Failed     2m29s (x4 over 3m59s)  kubelet            Error: ErrImagePull
Warning  Failed     2m29s (x2 over 3m20s)  kubelet            Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     2m18s (x6 over 3m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    2m7s (x7 over 3m58s)   kubelet            Back-off pulling image "nginx:alpine"
functional_test_tunnel_test.go:150: (dbg) Run:  kubectl --context functional-20210813035500-2022292 logs nginx-svc -n default
functional_test_tunnel_test.go:150: (dbg) Non-zero exit: kubectl --context functional-20210813035500-2022292 logs nginx-svc -n default: exit status 1 (90.860675ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_tunnel_test.go:150: kubectl --context functional-20210813035500-2022292 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:151: wait: run=nginx-svc within 4m0s: timed out waiting for the condition
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService (241.09s)
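Editor's note: the event cadence above (`Pulling ... (x4 over 4m)`) is consistent with kubelet's image-pull back-off, which by default starts at 10s, doubles on each failure, and caps at 5m — treating those defaults as an assumption about this cluster's kubelet configuration, a rough model of the attempt schedule:

```python
def pull_attempt_times(window_s, initial_backoff_s=10, max_backoff_s=300):
    """Start times (seconds) of image pull attempts under doubling back-off.

    Models kubelet-style back-off: first retry after initial_backoff_s,
    doubling on each failure, capped at max_backoff_s. Real attempts start
    slightly later, since each failed pull itself spends a few seconds
    before the registry returns 429.
    """
    times = [0]
    t, delay = 0, initial_backoff_s
    while t + delay < window_s:
        t += delay
        times.append(t)
        delay = min(delay * 2, max_backoff_s)
    return times

print(pull_attempt_times(240))  # the test's 4m wait window
```

At most five attempts fit in the 4m window ([0, 10, 30, 70, 150]), and per-attempt pull latency eats into that, matching the four pulls observed. Authenticating to Docker Hub or pre-loading the image (for example `minikube cache add nginx:alpine`) would sidestep the anonymous rate limit entirely.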

TestFunctional/parallel/MountCmd/any-port (243.07s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-20210813035500-2022292 /tmp/mounttest816879912:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1628827311450779352" to /tmp/mounttest816879912/created-by-test
functional_test_mount_test.go:110: wrote "test-1628827311450779352" to /tmp/mounttest816879912/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1628827311450779352" to /tmp/mounttest816879912/test-1628827311450779352
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (314.576159ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 13 04:01 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 13 04:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 13 04:01 test-1628827311450779352
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh cat /mount-9p/test-1628827311450779352
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210813035500-2022292 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [4c6a75b5-9361-4a6a-88bf-365c6b1bbd70] Pending
helpers_test.go:343: "busybox-mount" [4c6a75b5-9361-4a6a-88bf-365c6b1bbd70] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0813 04:02:01.447381 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 04:02:29.130780 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:156: ***** TestFunctional/parallel/MountCmd/any-port: pod "integration-test=busybox-mount" failed to start within 4m0s: timed out waiting for the condition ****
functional_test_mount_test.go:156: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210813035500-2022292 -n functional-20210813035500-2022292
functional_test_mount_test.go:156: TestFunctional/parallel/MountCmd/any-port: showing logs for failed pods as of 2021-08-13 04:05:53.625683915 +0000 UTC m=+2251.637743108
functional_test_mount_test.go:156: (dbg) Run:  kubectl --context functional-20210813035500-2022292 describe po busybox-mount -n default
functional_test_mount_test.go:156: (dbg) kubectl --context functional-20210813035500-2022292 describe po busybox-mount -n default:
Name:         busybox-mount
Namespace:    default
Priority:     0
Node:         functional-20210813035500-2022292/192.168.49.2
Start Time:   Fri, 13 Aug 2021 04:01:53 +0000
Labels:       integration-test=busybox-mount
Annotations:  <none>
Status:       Pending
IP:           10.244.0.6
IPs:
IP:  10.244.0.6
Containers:
mount-munger:
Container ID:  
Image:         busybox:1.28.4-glibc
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5kn7z (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
test-volume:
Type:          HostPath (bare host directory volume)
Path:          /mount-9p
HostPathType:  
kube-api-access-5kn7z:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  4m                     default-scheduler  Successfully assigned default/busybox-mount to functional-20210813035500-2022292
Warning  Failed     3m20s                  kubelet            Failed to pull image "busybox:1.28.4-glibc": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.28.4-glibc": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    2m31s (x4 over 4m)     kubelet            Pulling image "busybox:1.28.4-glibc"
Warning  Failed     2m31s (x3 over 3m59s)  kubelet            Failed to pull image "busybox:1.28.4-glibc": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/busybox:1.28.4-glibc": failed to copy: httpReaderSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:bda689514be526d9557ad442312e5d541757c453c50b8cf2ae68597c291385a1: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     2m31s (x4 over 3m59s)  kubelet            Error: ErrImagePull
Warning  Failed     2m16s (x6 over 3m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    2m3s (x7 over 3m58s)   kubelet            Back-off pulling image "busybox:1.28.4-glibc"
functional_test_mount_test.go:156: (dbg) Run:  kubectl --context functional-20210813035500-2022292 logs busybox-mount -n default
functional_test_mount_test.go:156: (dbg) Non-zero exit: kubectl --context functional-20210813035500-2022292 logs busybox-mount -n default: exit status 1 (110.051199ms)

** stderr ** 
	Error from server (BadRequest): container "mount-munger" in pod "busybox-mount" is waiting to start: trying and failing to pull image
** /stderr **
functional_test_mount_test.go:156: kubectl --context functional-20210813035500-2022292 logs busybox-mount -n default: exit status 1
functional_test_mount_test.go:157: failed waiting for busybox-mount pod: integration-test=busybox-mount within 4m0s: timed out waiting for the condition
functional_test_mount_test.go:83: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:84: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:84: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (283.149383ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=999,access=any,msize=65536,trans=tcp,noextend,port=37473)
	total 2
	-rw-r--r-- 1 docker docker 24 Aug 13 04:01 created-by-test
	-rw-r--r-- 1 docker docker 24 Aug 13 04:01 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Aug 13 04:01 test-1628827311450779352
	cat: /mount-9p/pod-dates: No such file or directory
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:86: debugging command "out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-20210813035500-2022292 /tmp/mounttest816879912:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:97: (dbg) [out/minikube-linux-arm64 mount -p functional-20210813035500-2022292 /tmp/mounttest816879912:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/mounttest816879912 into VM as /mount-9p ...
- Mount type:   
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Permissions:  755 (-rwxr-xr-x)
- Options:      map[]
- Bind Address: 192.168.49.1:37473
* Userspace file server: ufs starting
* Successfully mounted /tmp/mounttest816879912 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...
functional_test_mount_test.go:97: (dbg) [out/minikube-linux-arm64 mount -p functional-20210813035500-2022292 /tmp/mounttest816879912:/mount-9p --alsologtostderr -v=1] stderr:
I0813 04:01:51.536460 2058826 out.go:298] Setting OutFile to fd 1 ...
I0813 04:01:51.536542 2058826 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0813 04:01:51.536546 2058826 out.go:311] Setting ErrFile to fd 2...
I0813 04:01:51.536550 2058826 out.go:345] TERM=,COLORTERM=, which probably does not support color
I0813 04:01:51.536665 2058826 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
I0813 04:01:51.536827 2058826 mustload.go:65] Loading cluster: functional-20210813035500-2022292
I0813 04:01:51.537566 2058826 cli_runner.go:115] Run: docker container inspect functional-20210813035500-2022292 --format={{.State.Status}}
I0813 04:01:51.584920 2058826 host.go:66] Checking if "functional-20210813035500-2022292" exists ...
I0813 04:01:51.585230 2058826 cli_runner.go:115] Run: docker network inspect functional-20210813035500-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0813 04:01:51.631435 2058826 out.go:177] * Mounting host path /tmp/mounttest816879912 into VM as /mount-9p ...
I0813 04:01:51.633196 2058826 out.go:177]   - Mount type:   
I0813 04:01:51.635284 2058826 out.go:177]   - User ID:      docker
I0813 04:01:51.637419 2058826 out.go:177]   - Group ID:     docker
I0813 04:01:51.639198 2058826 out.go:177]   - Version:      9p2000.L
I0813 04:01:51.641025 2058826 out.go:177]   - Message Size: 262144
I0813 04:01:51.642712 2058826 out.go:177]   - Permissions:  755 (-rwxr-xr-x)
I0813 04:01:51.645404 2058826 out.go:177]   - Options:      map[]
I0813 04:01:51.647556 2058826 out.go:177]   - Bind Address: 192.168.49.1:37473
I0813 04:01:51.649624 2058826 out.go:177] * Userspace file server: 
I0813 04:01:51.648156 2058826 ssh_runner.go:149] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0813 04:01:51.649759 2058826 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20210813035500-2022292
I0813 04:01:51.698109 2058826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50813 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/functional-20210813035500-2022292/id_rsa Username:docker}
I0813 04:01:51.802140 2058826 mount.go:169] unmount for /mount-9p ran successfully
I0813 04:01:51.802161 2058826 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -m 755 -p /mount-9p"
I0813 04:01:51.809531 2058826 ssh_runner.go:149] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=37473,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I0813 04:01:51.820073 2058826 main.go:116] stdlog: ufs.go:141 connected
I0813 04:01:51.823664 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tversion tag 65535 msize 65536 version '9P2000.L'
I0813 04:01:51.823706 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rversion tag 65535 msize 65536 version '9P2000'
I0813 04:01:51.824179 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0813 04:01:51.824237 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rattach tag 0 aqid (442a0 3dac5d55 'd')
I0813 04:01:51.825784 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 0
I0813 04:01:51.825831 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('mounttest816879912' 'jenkins' 'jenkins' '' q (442a0 3dac5d55 'd') m d700 at 0 mt 1628827311 l 4096 t 0 d 0 ext )
I0813 04:01:51.828261 2058826 mount.go:94] mount successful: ""
I0813 04:01:51.830810 2058826 out.go:177] * Successfully mounted /tmp/mounttest816879912 to /mount-9p
I0813 04:01:51.832784 2058826 out.go:177] 
I0813 04:01:51.834435 2058826 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0813 04:01:52.533859 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 0
I0813 04:01:52.533926 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('mounttest816879912' 'jenkins' 'jenkins' '' q (442a0 3dac5d55 'd') m d700 at 0 mt 1628827311 l 4096 t 0 d 0 ext )
I0813 04:01:52.795874 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 0
I0813 04:01:52.795938 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('mounttest816879912' 'jenkins' 'jenkins' '' q (442a0 3dac5d55 'd') m d700 at 0 mt 1628827311 l 4096 t 0 d 0 ext )
I0813 04:01:52.796269 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 0 newfid 1 
I0813 04:01:52.796301 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rwalk tag 0 
I0813 04:01:52.796410 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Topen tag 0 fid 1 mode 0
I0813 04:01:52.796446 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Ropen tag 0 qid (442a0 3dac5d55 'd') iounit 0
I0813 04:01:52.796558 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 0
I0813 04:01:52.796604 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('mounttest816879912' 'jenkins' 'jenkins' '' q (442a0 3dac5d55 'd') m d700 at 0 mt 1628827311 l 4096 t 0 d 0 ext )
I0813 04:01:52.796715 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 1 offset 0 count 65512
I0813 04:01:52.796808 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 258
I0813 04:01:52.796913 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 1 offset 258 count 65254
I0813 04:01:52.796946 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 0
I0813 04:01:52.797038 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 1 offset 258 count 65512
I0813 04:01:52.797062 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 0
I0813 04:01:52.797167 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0813 04:01:52.797194 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rwalk tag 0 (442ae 3dac5d55 '') 
I0813 04:01:52.797284 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:01:52.797311 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (442ae 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:01:52.797424 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:01:52.797454 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (442ae 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:01:52.797548 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 2
I0813 04:01:52.797568 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:01:52.797667 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 0 newfid 2 0:'test-1628827311450779352' 
I0813 04:01:52.797698 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rwalk tag 0 (442ce 3dac5d55 '') 
I0813 04:01:52.797800 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:01:52.797827 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('test-1628827311450779352' 'jenkins' 'jenkins' '' q (442ce 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:01:52.797920 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:01:52.797951 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('test-1628827311450779352' 'jenkins' 'jenkins' '' q (442ce 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:01:52.798040 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 2
I0813 04:01:52.798056 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:01:52.798154 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0813 04:01:52.798184 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rwalk tag 0 (442b0 3dac5d55 '') 
I0813 04:01:52.798268 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:01:52.798293 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (442b0 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:01:52.798383 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:01:52.798407 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (442b0 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:01:52.798494 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 2
I0813 04:01:52.798510 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:01:52.798608 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 1 offset 258 count 65512
I0813 04:01:52.798650 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 0
I0813 04:01:52.798740 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 1
I0813 04:01:52.798769 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:01:53.063457 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 0 newfid 1 0:'test-1628827311450779352' 
I0813 04:01:53.063515 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rwalk tag 0 (442ce 3dac5d55 '') 
I0813 04:01:53.063644 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 1
I0813 04:01:53.063694 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('test-1628827311450779352' 'jenkins' 'jenkins' '' q (442ce 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:01:53.070966 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 1 newfid 2 
I0813 04:01:53.071004 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rwalk tag 0 
I0813 04:01:53.071117 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Topen tag 0 fid 2 mode 0
I0813 04:01:53.071181 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Ropen tag 0 qid (442ce 3dac5d55 '') iounit 0
I0813 04:01:53.071302 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 1
I0813 04:01:53.071347 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('test-1628827311450779352' 'jenkins' 'jenkins' '' q (442ce 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:01:53.071460 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 2 offset 0 count 65512
I0813 04:01:53.071497 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 24
I0813 04:01:53.071594 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 2 offset 24 count 65512
I0813 04:01:53.071615 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 0
I0813 04:01:53.071732 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 2 offset 24 count 65512
I0813 04:01:53.071755 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 0
I0813 04:01:53.072068 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 2
I0813 04:01:53.072095 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:01:53.072315 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 1
I0813 04:01:53.072350 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:05:54.086414 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 0
I0813 04:05:54.086481 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('mounttest816879912' 'jenkins' 'jenkins' '' q (442a0 3dac5d55 'd') m d700 at 0 mt 1628827311 l 4096 t 0 d 0 ext )
I0813 04:05:54.086835 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 0 newfid 1 
I0813 04:05:54.086863 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rwalk tag 0 
I0813 04:05:54.086970 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Topen tag 0 fid 1 mode 0
I0813 04:05:54.087010 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Ropen tag 0 qid (442a0 3dac5d55 'd') iounit 0
I0813 04:05:54.087109 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 0
I0813 04:05:54.087137 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('mounttest816879912' 'jenkins' 'jenkins' '' q (442a0 3dac5d55 'd') m d700 at 0 mt 1628827311 l 4096 t 0 d 0 ext )
I0813 04:05:54.087247 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 1 offset 0 count 65512
I0813 04:05:54.087347 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 258
I0813 04:05:54.087458 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 1 offset 258 count 65254
I0813 04:05:54.087479 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 0
I0813 04:05:54.087579 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 1 offset 258 count 65512
I0813 04:05:54.087599 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 0
I0813 04:05:54.087702 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0813 04:05:54.087734 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rwalk tag 0 (442ae 3dac5d55 '') 
I0813 04:05:54.087832 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:05:54.087861 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (442ae 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:05:54.095181 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:05:54.095235 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (442ae 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:05:54.095371 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 2
I0813 04:05:54.095392 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:05:54.095508 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 0 newfid 2 0:'test-1628827311450779352' 
I0813 04:05:54.095543 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rwalk tag 0 (442ce 3dac5d55 '') 
I0813 04:05:54.095640 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:05:54.095671 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('test-1628827311450779352' 'jenkins' 'jenkins' '' q (442ce 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:05:54.102987 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:05:54.103037 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('test-1628827311450779352' 'jenkins' 'jenkins' '' q (442ce 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:05:54.103178 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 2
I0813 04:05:54.103205 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:05:54.103321 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0813 04:05:54.103358 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rwalk tag 0 (442b0 3dac5d55 '') 
I0813 04:05:54.103478 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:05:54.103515 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (442b0 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:05:54.110776 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tstat tag 0 fid 2
I0813 04:05:54.110823 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (442b0 3dac5d55 '') m 644 at 0 mt 1628827311 l 24 t 0 d 0 ext )
I0813 04:05:54.110950 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 2
I0813 04:05:54.110976 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:05:54.111076 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tread tag 0 fid 1 offset 258 count 65512
I0813 04:05:54.111109 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rread tag 0 count 0
I0813 04:05:54.111219 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 1
I0813 04:05:54.111257 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:05:54.112543 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0813 04:05:54.112596 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rerror tag 0 ename 'file not found' ecode 0
I0813 04:05:54.396046 2058826 main.go:116] stdlog: srv_conn.go:133 >>> 192.168.49.2:42018 Tclunk tag 0 fid 0
I0813 04:05:54.396080 2058826 main.go:116] stdlog: srv_conn.go:190 <<< 192.168.49.2:42018 Rclunk tag 0
I0813 04:05:54.412876 2058826 main.go:116] stdlog: ufs.go:147 disconnected
I0813 04:05:54.425163 2058826 out.go:177] * Unmounting /mount-9p ...
I0813 04:05:54.425193 2058826 ssh_runner.go:149] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0813 04:05:54.432754 2058826 mount.go:169] unmount for /mount-9p ran successfully
I0813 04:05:54.434506 2058826 out.go:177] 
W0813 04:05:54.434632 2058826 out.go:242] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0813 04:05:54.436506 2058826 out.go:177] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (243.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (96.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:218: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:220: (dbg) Run:  kubectl --context functional-20210813035500-2022292 get svc nginx-svc
functional_test_tunnel_test.go:224: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.106.203.203   10.106.203.203   80:31767/TCP   5m37s
functional_test_tunnel_test.go:231: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (96.61s)

TestScheduledStopUnix (106.69s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-20210813042300-2022292 --memory=2048 --driver=docker  --container-runtime=containerd
E0813 04:23:29.758851 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-20210813042300-2022292 --memory=2048 --driver=docker  --container-runtime=containerd: (1m4.836097688s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210813042300-2022292 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-20210813042300-2022292 -n scheduled-stop-20210813042300-2022292
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210813042300-2022292 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210813042300-2022292 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210813042300-2022292 -n scheduled-stop-20210813042300-2022292
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-20210813042300-2022292
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-20210813042300-2022292 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-20210813042300-2022292
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-20210813042300-2022292: exit status 3 (3.271856771s)
-- stdout --
	scheduled-stop-20210813042300-2022292
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
-- /stdout --
** stderr ** 
	E0813 04:24:37.260231 2126557 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47000->127.0.0.1:50888: read: connection reset by peer
	E0813 04:24:37.260280 2126557 status.go:258] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47000->127.0.0.1:50888: read: connection reset by peer
** /stderr **
scheduled_stop_test.go:209: minikube status: exit status 3
-- stdout --
	scheduled-stop-20210813042300-2022292
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
-- /stdout --
** stderr ** 
	E0813 04:24:37.260231 2126557 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47000->127.0.0.1:50888: read: connection reset by peer
	E0813 04:24:37.260280 2126557 status.go:258] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47000->127.0.0.1:50888: read: connection reset by peer
** /stderr **
panic.go:613: *** TestScheduledStopUnix FAILED at 2021-08-13 04:24:37.263046752 +0000 UTC m=+3375.275105954
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect scheduled-stop-20210813042300-2022292
helpers_test.go:236: (dbg) docker inspect scheduled-stop-20210813042300-2022292:
-- stdout --
	[
	    {
	        "Id": "b6a3b0589a6cf86ee4b3cff1151b9d0c5559ffe7bf1fb301cc317905c610ebb3",
	        "Created": "2021-08-13T04:23:02.080170703Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2124198,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T04:23:02.597423876Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/b6a3b0589a6cf86ee4b3cff1151b9d0c5559ffe7bf1fb301cc317905c610ebb3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b6a3b0589a6cf86ee4b3cff1151b9d0c5559ffe7bf1fb301cc317905c610ebb3/hostname",
	        "HostsPath": "/var/lib/docker/containers/b6a3b0589a6cf86ee4b3cff1151b9d0c5559ffe7bf1fb301cc317905c610ebb3/hosts",
	        "LogPath": "/var/lib/docker/containers/b6a3b0589a6cf86ee4b3cff1151b9d0c5559ffe7bf1fb301cc317905c610ebb3/b6a3b0589a6cf86ee4b3cff1151b9d0c5559ffe7bf1fb301cc317905c610ebb3-json.log",
	        "Name": "/scheduled-stop-20210813042300-2022292",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-20210813042300-2022292:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-20210813042300-2022292",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/33c826e776b035310721a8dd13a0ab7e7c9a9f314eb7c5dd972a2d3745183d94-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa3171af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/docker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33c826e776b035310721a8dd13a0ab7e7c9a9f314eb7c5dd972a2d3745183d94/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33c826e776b035310721a8dd13a0ab7e7c9a9f314eb7c5dd972a2d3745183d94/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33c826e776b035310721a8dd13a0ab7e7c9a9f314eb7c5dd972a2d3745183d94/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-20210813042300-2022292",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-20210813042300-2022292/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-20210813042300-2022292",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-20210813042300-2022292",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-20210813042300-2022292",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8beab318d0a50774423ad480ced4b0351c6b863048b37e5f7c8cdac1e205eb62",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50887"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50884"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50886"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50885"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8beab318d0a5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-20210813042300-2022292": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b6a3b0589a6c",
	                        "scheduled-stop-20210813042300-2022292"
	                    ],
	                    "NetworkID": "c999c19722e5c0eadbcb50afc577f0aea1a578213204d4810e3d157e09f2169b",
	                    "EndpointID": "c4737bda3c4e417d44135f39b574c19f5a753a6531a7a49514f52fb1f3a756c6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210813042300-2022292 -n scheduled-stop-20210813042300-2022292
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-20210813042300-2022292 -n scheduled-stop-20210813042300-2022292: exit status 3 (3.283124175s)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E0813 04:24:40.577230 2126595 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47036->127.0.0.1:50888: read: connection reset by peer
	E0813 04:24:40.577252 2126595 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47036->127.0.0.1:50888: read: connection reset by peer
** /stderr **
helpers_test.go:240: status error: exit status 3 (may be ok)
helpers_test.go:242: "scheduled-stop-20210813042300-2022292" host is not running, skipping log retrieval (state="Error")
helpers_test.go:176: Cleaning up "scheduled-stop-20210813042300-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-20210813042300-2022292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-20210813042300-2022292: (6.917549844s)
--- FAIL: TestScheduledStopUnix (106.69s)
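Editor's note: the failure here is the `status` call returning exit 3 (SSH handshake reset) after the scheduled stop fired. A condensed sketch of the sequence the test drives, plus a minimal, hypothetical `interpret_status` helper mirroring the "may be ok" exit-code handling in helpers_test.go (the minikube commands are shown as comments for context, not executed):

```shell
# Sequence exercised by TestScheduledStopUnix (profile name shortened):
#   minikube stop -p <profile> --schedule 5m      # arm a stop 5 minutes out
#   minikube stop -p <profile> --cancel-scheduled # disarm it
#   minikube stop -p <profile> --schedule 5s      # arm again and let it fire
#   minikube status -p <profile>                  # exits 3 once the host is down
interpret_status() {
  case "$1" in
    0) echo "running" ;;
    3) echo "host error (may be ok after a scheduled stop)" ;;
    *) echo "unexpected exit: $1" ;;
  esac
}
interpret_status 3   # prints: host error (may be ok after a scheduled stop)
```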
TestRunningBinaryUpgrade (4.45s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.463998703.exe start -p running-upgrade-20210813042716-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.463998703.exe start -p running-upgrade-20210813042716-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (108.552329ms)
-- stdout --
	* [running-upgrade-20210813042716-2022292] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig090480002
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.463998703.exe start -p running-upgrade-20210813042716-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.463998703.exe start -p running-upgrade-20210813042716-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (112.106954ms)
-- stdout --
	* [running-upgrade-20210813042716-2022292] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig585032953
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.463998703.exe start -p running-upgrade-20210813042716-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:128: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.463998703.exe start -p running-upgrade-20210813042716-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (100.445957ms)
-- stdout --
	* [running-upgrade-20210813042716-2022292] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig968541700
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
** /stderr **
version_upgrade_test.go:134: legacy v1.16.0 start failed: exit status 65
panic.go:613: *** TestRunningBinaryUpgrade FAILED at 2021-08-13 04:27:20.070450406 +0000 UTC m=+3538.082509599
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect running-upgrade-20210813042716-2022292
helpers_test.go:232: (dbg) Non-zero exit: docker inspect running-upgrade-20210813042716-2022292: exit status 1 (54.361859ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such object: running-upgrade-20210813042716-2022292
** /stderr **
helpers_test.go:234: failed to get docker inspect: exit status 1
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-20210813042716-2022292 -n running-upgrade-20210813042716-2022292
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-20210813042716-2022292 -n running-upgrade-20210813042716-2022292: exit status 85 (60.219385ms)
-- stdout --
	* Profile "running-upgrade-20210813042716-2022292" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-20210813042716-2022292"
-- /stdout --
helpers_test.go:240: status error: exit status 85 (may be ok)
helpers_test.go:242: "running-upgrade-20210813042716-2022292" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-20210813042716-2022292\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-20210813042716-2022292\"")
helpers_test.go:176: Cleaning up "running-upgrade-20210813042716-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-20210813042716-2022292
--- FAIL: TestRunningBinaryUpgrade (4.45s)
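Editor's note: this test and TestStoppedBinaryUpgrade below fail identically: the legacy v1.16.0 binary exits 65 (PROVIDER_DOCKER_NOT_FOUND) because its docker driver predates arm64 support. A minimal sketch of an architecture guard that would skip these tests on such hosts (`arch` is hard-coded here for illustration; a real check would use `uname -m`):

```shell
# Skip legacy-binary upgrade tests on architectures the v1.16.0 docker
# driver does not support (arm64/aarch64).
arch=arm64
if [ "$arch" = "arm64" ] || [ "$arch" = "aarch64" ]; then
  echo "SKIP: docker driver unsupported on ${arch} in minikube v1.16.0"
fi
```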
TestStoppedBinaryUpgrade (4.82s)
=== RUN   TestStoppedBinaryUpgrade
=== PAUSE TestStoppedBinaryUpgrade
=== CONT  TestStoppedBinaryUpgrade
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.636168843.exe start -p stopped-upgrade-20210813042711-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.636168843.exe start -p stopped-upgrade-20210813042711-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (128.349309ms)
-- stdout --
	* [stopped-upgrade-20210813042711-2022292] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig065437294
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
** /stderr **
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.636168843.exe start -p stopped-upgrade-20210813042711-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.636168843.exe start -p stopped-upgrade-20210813042711-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (79.925887ms)
-- stdout --
	* [stopped-upgrade-20210813042711-2022292] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig655073013
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
** /stderr **
version_upgrade_test.go:186: (dbg) Run:  /tmp/minikube-v1.16.0.636168843.exe start -p stopped-upgrade-20210813042711-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:186: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.636168843.exe start -p stopped-upgrade-20210813042711-2022292 --memory=2200 --vm-driver=docker  --container-runtime=containerd: exit status 65 (84.615488ms)
-- stdout --
	* [stopped-upgrade-20210813042711-2022292] minikube v1.16.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - KUBECONFIG=/tmp/legacy_kubeconfig972629968
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	* Exiting due to PROVIDER_DOCKER_NOT_FOUND: The 'docker' provider was not found: docker driver is not supported on "arm64" systems yet
	* Suggestion: Try other drivers
	* Documentation: https://minikube.sigs.k8s.io/docs/drivers/docker/
** /stderr **
version_upgrade_test.go:192: legacy v1.16.0 start failed: exit status 65
panic.go:613: *** TestStoppedBinaryUpgrade FAILED at 2021-08-13 04:27:15.642455446 +0000 UTC m=+3533.654514648
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStoppedBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect stopped-upgrade-20210813042711-2022292
helpers_test.go:232: (dbg) Non-zero exit: docker inspect stopped-upgrade-20210813042711-2022292: exit status 1 (61.485664ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: stopped-upgrade-20210813042711-2022292

** /stderr **
helpers_test.go:234: failed to get docker inspect: exit status 1
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p stopped-upgrade-20210813042711-2022292 -n stopped-upgrade-20210813042711-2022292
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p stopped-upgrade-20210813042711-2022292 -n stopped-upgrade-20210813042711-2022292: exit status 85 (56.588468ms)

-- stdout --
	* Profile "stopped-upgrade-20210813042711-2022292" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-20210813042711-2022292"

-- /stdout --
helpers_test.go:240: status error: exit status 85 (may be ok)
helpers_test.go:242: "stopped-upgrade-20210813042711-2022292" host is not running, skipping log retrieval (state="* Profile \"stopped-upgrade-20210813042711-2022292\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p stopped-upgrade-20210813042711-2022292\"")
helpers_test.go:176: Cleaning up "stopped-upgrade-20210813042711-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p stopped-upgrade-20210813042711-2022292
--- FAIL: TestStoppedBinaryUpgrade (4.82s)

TestMissingContainerUpgrade (81.92s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.309512924.exe start -p missing-upgrade-20210813042509-2022292 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:311: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.309512924.exe start -p missing-upgrade-20210813042509-2022292 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (59.951118192s)

-- stdout --
	! [missing-upgrade-20210813042509-2022292] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20210813042509-2022292
	* Pulling base image ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7845MB available) ...
	* Deleting "missing-upgrade-20210813042509-2022292" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (2 available), Memory=2200MB (7845MB available) ...

-- /stdout --
** stderr ** 
	* minikube 1.22.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.22.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	! StartHost failed, but will try again: creating host: create: creating: create kic node: check container "missing-upgrade-20210813042509-2022292" running: temporary error created container "missing-upgrade-20210813042509-2022292" is not running yet
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210813042509-2022292" may fix it.: creating host: create: creating: create kic node: check container "missing-upgrade-20210813042509-2022292" running: temporary error created container "missing-upgrade-20210813042509-2022292" is not running yet
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.309512924.exe start -p missing-upgrade-20210813042509-2022292 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:311: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.309512924.exe start -p missing-upgrade-20210813042509-2022292 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (6.355694198s)

-- stdout --
	* [missing-upgrade-20210813042509-2022292] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20210813042509-2022292
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-20210813042509-2022292" ...
	* Restarting existing docker container for "missing-upgrade-20210813042509-2022292" ...

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210813042509-2022292", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210813042509-2022292" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210813042509-2022292", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:311: (dbg) Run:  /tmp/minikube-v1.9.1.309512924.exe start -p missing-upgrade-20210813042509-2022292 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:311: (dbg) Non-zero exit: /tmp/minikube-v1.9.1.309512924.exe start -p missing-upgrade-20210813042509-2022292 --memory=2200 --driver=docker  --container-runtime=containerd: exit status 70 (6.444260103s)

-- stdout --
	* [missing-upgrade-20210813042509-2022292] minikube v1.9.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20210813042509-2022292
	* Pulling base image ...
	* Restarting existing docker container for "missing-upgrade-20210813042509-2022292" ...
	* Restarting existing docker container for "missing-upgrade-20210813042509-2022292" ...

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210813042509-2022292", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-20210813042509-2022292" may fix it.: provision: get ssh host-port: get host-bind port 22 for "missing-upgrade-20210813042509-2022292", output 
	Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
	: exit status 1
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:317: release start failed: exit status 70
panic.go:613: *** TestMissingContainerUpgrade FAILED at 2021-08-13 04:26:26.87475519 +0000 UTC m=+3484.886814392
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect missing-upgrade-20210813042509-2022292
helpers_test.go:236: (dbg) docker inspect missing-upgrade-20210813042509-2022292:

-- stdout --
	[
	    {
	        "Id": "ed5d86d3cd3fc42dc45be42aa53382a1cc78da0cac182689043a6715553c7761",
	        "Created": "2021-08-13T04:25:49.085537549Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 1,
	            "Error": "",
	            "StartedAt": "2021-08-13T04:26:26.689811418Z",
	            "FinishedAt": "2021-08-13T04:26:26.689313009Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/ed5d86d3cd3fc42dc45be42aa53382a1cc78da0cac182689043a6715553c7761/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed5d86d3cd3fc42dc45be42aa53382a1cc78da0cac182689043a6715553c7761/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed5d86d3cd3fc42dc45be42aa53382a1cc78da0cac182689043a6715553c7761/hosts",
	        "LogPath": "/var/lib/docker/containers/ed5d86d3cd3fc42dc45be42aa53382a1cc78da0cac182689043a6715553c7761/ed5d86d3cd3fc42dc45be42aa53382a1cc78da0cac182689043a6715553c7761-json.log",
	        "Name": "/missing-upgrade-20210813042509-2022292",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20210813042509-2022292:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d30d35160e1280e0ff7604e95027f5c9ec6f5b7fc26c8ac7fc1dc6768e409db6-init/diff:/var/lib/docker/overlay2/283bb542aec9deb3b8966a4f27920af8006a3d4f19630ad6bdb3b6945a68edac/diff:/var/lib/docker/overlay2/13a736bd6f06bcd554052a1ca335dd0c257430ff3bbd72221a9d79de4714e14e/diff:/var/lib/docker/overlay2/01554d1fe6cd20d5f6eadc5afa20f64845458886124414387d0fac87121a2923/diff:/var/lib/docker/overlay2/863de27399c21dfdc70837b2e9ca40e651c80daabb03458fc6b986938c641df9/diff:/var/lib/docker/overlay2/75d8a20618ae933bc0012a5fa4691cad054b464b14e81036243bed99b0eea260/diff:/var/lib/docker/overlay2/2326cd3ffdcb3bd4d71694eac01753868b48053d35c4e2adcbc7f0b9a32ecd85/diff:/var/lib/docker/overlay2/28949507e68becf5c73fd2a2279a706cf5aa65344e705fc085fa5ed4cdef8bfc/diff:/var/lib/docker/overlay2/996141a7a7a8a6093ca1d203aa5d967a9e97cfbb304a4dc40be56a6f9dc7f756/diff:/var/lib/docker/overlay2/9b434f771469f7ee48dd6539e52aee435b65760c9066d777bc70eba7292dc6ed/diff:/var/lib/docker/overlay2/58ff1d79077a5905707835bcd9a41192d08a4301b3d85d0a642c1e3d6ba3d18f/diff:/var/lib/docker/overlay2/faa0bee1de0f8ea6fbcec2285d8a61e0177a9c3c2e1a5458188d402b819089fe/diff:/var/lib/docker/overlay2/e43444d74125d6dca56898ddd688d6549b68914291e01db359e29888f0555844/diff:/var/lib/docker/overlay2/cdfcf7c9b6e0659745d1b29cc86d4a444e1c29a2b9f3d84c2b1de7a86b412e6a/diff:/var/lib/docker/overlay2/bd6d510c59bf9bb452c4c966e9ecf4ba83c35c890beecd3a2e7d4b2474358203/diff:/var/lib/docker/overlay2/74cbc1b58b5a5af3b8f9b790a3fb670a56e4b93a73e2a77dba1315618417f610/diff:/var/lib/docker/overlay2/23ee9207f54a7a6e817be285bb147f9bbb630ed859d9f27262173ae6f6350adf/diff:/var/lib/docker/overlay2/f2d6f98d60301b32758e7f71ca2532b1cc085359389827e6441966881cd73086/diff:/var/lib/docker/overlay2/627829829fe26f0a010eb09b9ed969e30c4bacedb7f15618d9b204e2ac8935cb/diff:/var/lib/docker/overlay2/e715da7ed94d9e84c98c6cada5d2bba3f0f4f2eea66f96d61e82493303f0bf8e/diff:/var/lib/docker/overlay2/06bbc0ae5d80815a51ab73c0306faf46e3efa16cb9038b63f869875eec350be3/diff:/var/lib/docker/overlay2/87d892d94cfc5c25d4f06fa0768bdc929ebe0161f51da7098735e0032eb67af3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d30d35160e1280e0ff7604e95027f5c9ec6f5b7fc26c8ac7fc1dc6768e409db6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d30d35160e1280e0ff7604e95027f5c9ec6f5b7fc26c8ac7fc1dc6768e409db6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d30d35160e1280e0ff7604e95027f5c9ec6f5b7fc26c8ac7fc1dc6768e409db6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20210813042509-2022292",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20210813042509-2022292/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20210813042509-2022292",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20210813042509-2022292",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20210813042509-2022292",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34203a929656f8dbff5e0e458e1ff112b09dc195598980c8374193121a488e45",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {},
	            "SandboxKey": "/var/run/docker/netns/34203a929656",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "50a45794cb65086e7ebc7055d6d286cb4e97a03cdc2174bea9239bc0909469d0",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-20210813042509-2022292 -n missing-upgrade-20210813042509-2022292
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-20210813042509-2022292 -n missing-upgrade-20210813042509-2022292: exit status 7 (92.581321ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:240: status error: exit status 7 (may be ok)
helpers_test.go:242: "missing-upgrade-20210813042509-2022292" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:176: Cleaning up "missing-upgrade-20210813042509-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-20210813042509-2022292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-20210813042509-2022292: (4.797292246s)
--- FAIL: TestMissingContainerUpgrade (81.92s)

TestStartStop/group/old-k8s-version/serial/Pause (14.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-20210813043048-2022292 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292: exit status 2 (329.447204ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292: exit status 2 (314.853473ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-20210813043048-2022292 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292: exit status 2 (385.888078ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: post-unpause kubelet status = "Stopped"; want = "Running"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210813043048-2022292
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210813043048-2022292:

-- stdout --
	[
	    {
	        "Id": "16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305",
	        "Created": "2021-08-13T04:30:50.696946161Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2156782,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T04:33:44.239454282Z",
	            "FinishedAt": "2021-08-13T04:33:42.828998558Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305/hostname",
	        "HostsPath": "/var/lib/docker/containers/16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305/hosts",
	        "LogPath": "/var/lib/docker/containers/16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305/16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305-json.log",
	        "Name": "/old-k8s-version-20210813043048-2022292",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-20210813043048-2022292:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210813043048-2022292",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d1c77a4413065856e02c35ff2475a1c410daceb87303ee3c597287b4ce79c9ce-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa3171af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/docker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41
f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1c77a4413065856e02c35ff2475a1c410daceb87303ee3c597287b4ce79c9ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1c77a4413065856e02c35ff2475a1c410daceb87303ee3c597287b4ce79c9ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1c77a4413065856e02c35ff2475a1c410daceb87303ee3c597287b4ce79c9ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210813043048-2022292",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210813043048-2022292/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210813043048-2022292",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210813043048-2022292",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210813043048-2022292",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32fb91282f37640e435ab3a6c90145af713320496634fa170e71c73f3eb1adfb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50961"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50960"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50957"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50959"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50958"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/32fb91282f37",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210813043048-2022292": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "16119f769e1b",
	                        "old-k8s-version-20210813043048-2022292"
	                    ],
	                    "NetworkID": "aefa95d69f7e8d3baaf5a16d71f467c48c007729ab8d8d3c94f20112d49ed093",
	                    "EndpointID": "0d7250e0b50a3a263ace567820e57db3ffea8a8b8e4f001dd6da8735cf754b97",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
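Aside: port mappings like the ones in the `docker inspect` output above can be pulled out programmatically rather than read by eye. A minimal sketch, assuming only the `NetworkSettings.Ports` structure shown in that output (the JSON below is a hand-trimmed excerpt of it, and the helper name `host_port` is ours, not part of minikube or Docker):

```python
import json

# Hand-trimmed excerpt of the `docker inspect` JSON shown above:
# inspect returns a list of containers, each with NetworkSettings.Ports.
inspect_json = """
[
    {
        "NetworkSettings": {
            "Ports": {
                "22/tcp": [
                    {"HostIp": "127.0.0.1", "HostPort": "50961"}
                ]
            }
        }
    }
]
"""

def host_port(inspect_output: str, container_port: str) -> str:
    """Return the first host port bound to container_port (e.g. "22/tcp")."""
    container = json.loads(inspect_output)[0]
    bindings = container["NetworkSettings"]["Ports"][container_port]
    return bindings[0]["HostPort"]

print(host_port(inspect_json, "22/tcp"))  # -> 50961
```

The same lookup is what the test harness effectively performs when it connects to the node's SSH port via 127.0.0.1:50961.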
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292: exit status 2 (338.515296ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-20210813043048-2022292 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-20210813043048-2022292 logs -n 25: (3.786828202s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | force-systemd-flag-20210813042828-2022292         | force-systemd-flag-20210813042828-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:30:02 UTC | Fri, 13 Aug 2021 04:30:02 UTC |
	|         | ssh cat /etc/containerd/config.toml               |                                           |         |         |                               |                               |
	| delete  | -p                                                | force-systemd-flag-20210813042828-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:30:02 UTC | Fri, 13 Aug 2021 04:30:09 UTC |
	|         | force-systemd-flag-20210813042828-2022292         |                                           |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210813042631-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:29:30 UTC | Fri, 13 Aug 2021 04:30:45 UTC |
	|         | kubernetes-upgrade-20210813042631-2022292         |                                           |         |         |                               |                               |
	|         | --memory=2200                                     |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210813042631-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:30:45 UTC | Fri, 13 Aug 2021 04:30:48 UTC |
	|         | kubernetes-upgrade-20210813042631-2022292         |                                           |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210813043009-2022292       | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:30:09 UTC | Fri, 13 Aug 2021 04:31:30 UTC |
	|         | cert-options-20210813043009-2022292               |                                           |         |         |                               |                               |
	|         | --memory=2048                                     |                                           |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                           |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                           |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                           |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                           |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	| -p      | cert-options-20210813043009-2022292               | cert-options-20210813043009-2022292       | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:31:30 UTC | Fri, 13 Aug 2021 04:31:30 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                           |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                           |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210813043009-2022292       | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:31:31 UTC | Fri, 13 Aug 2021 04:31:33 UTC |
	|         | cert-options-20210813043009-2022292               |                                           |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:31:33 UTC | Fri, 13 Aug 2021 04:33:08 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:30:49 UTC | Fri, 13 Aug 2021 04:33:13 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                           |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                           |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                           |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                           |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:17 UTC | Fri, 13 Aug 2021 04:33:17 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:22 UTC | Fri, 13 Aug 2021 04:33:22 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:18 UTC | Fri, 13 Aug 2021 04:33:38 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:38 UTC | Fri, 13 Aug 2021 04:33:38 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:22 UTC | Fri, 13 Aug 2021 04:33:43 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:43 UTC | Fri, 13 Aug 2021 04:33:43 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:38 UTC | Fri, 13 Aug 2021 04:39:46 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:43 UTC | Fri, 13 Aug 2021 04:39:55 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                           |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                           |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                           |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                           |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                           |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:39:57 UTC | Fri, 13 Aug 2021 04:39:57 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                           |         |         |                               |                               |
	| pause   | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:39:57 UTC | Fri, 13 Aug 2021 04:39:57 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                           |         |         |                               |                               |
	| unpause | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:39:58 UTC | Fri, 13 Aug 2021 04:39:59 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                           |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:39:59 UTC | Fri, 13 Aug 2021 04:40:02 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:40:02 UTC | Fri, 13 Aug 2021 04:40:03 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:40:05 UTC | Fri, 13 Aug 2021 04:40:06 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                           |         |         |                               |                               |
	| pause   | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:40:06 UTC | Fri, 13 Aug 2021 04:40:07 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                           |         |         |                               |                               |
	| unpause | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:40:07 UTC | Fri, 13 Aug 2021 04:40:08 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                           |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 04:40:03
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 04:40:03.219073 2168352 out.go:298] Setting OutFile to fd 1 ...
	I0813 04:40:03.219154 2168352 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:40:03.219158 2168352 out.go:311] Setting ErrFile to fd 2...
	I0813 04:40:03.219162 2168352 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:40:03.219291 2168352 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 04:40:03.219567 2168352 out.go:305] Setting JSON to false
	I0813 04:40:03.220640 2168352 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":51747,"bootTime":1628777856,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 04:40:03.220730 2168352 start.go:121] virtualization:  
	I0813 04:40:03.223402 2168352 out.go:177] * [embed-certs-20210813044003-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0813 04:40:03.225587 2168352 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 04:40:03.224409 2168352 notify.go:169] Checking for updates...
	I0813 04:40:03.230119 2168352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 04:40:03.236528 2168352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 04:40:03.238537 2168352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 04:40:03.239095 2168352 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 04:40:03.289945 2168352 docker.go:132] docker version: linux-20.10.8
	I0813 04:40:03.290055 2168352 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:40:03.404596 2168352 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:40:03.335246776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:40:03.404706 2168352 docker.go:244] overlay module found
	I0813 04:40:03.407368 2168352 out.go:177] * Using the docker driver based on user configuration
	I0813 04:40:03.407388 2168352 start.go:278] selected driver: docker
	I0813 04:40:03.407394 2168352 start.go:751] validating driver "docker" against <nil>
	I0813 04:40:03.407408 2168352 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 04:40:03.407459 2168352 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 04:40:03.407532 2168352 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 04:40:03.409336 2168352 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 04:40:03.409639 2168352 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:40:03.492229 2168352 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:40:03.437117915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:40:03.492363 2168352 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 04:40:03.492534 2168352 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 04:40:03.492555 2168352 cni.go:93] Creating CNI manager for ""
	I0813 04:40:03.492562 2168352 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 04:40:03.492573 2168352 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 04:40:03.492584 2168352 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 04:40:03.492589 2168352 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 04:40:03.492594 2168352 start_flags.go:277] config:
	{Name:embed-certs-20210813044003-2022292 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813044003-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 04:40:03.495899 2168352 out.go:177] * Starting control plane node embed-certs-20210813044003-2022292 in cluster embed-certs-20210813044003-2022292
	I0813 04:40:03.495932 2168352 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 04:40:03.497592 2168352 out.go:177] * Pulling base image ...
	I0813 04:40:03.497612 2168352 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 04:40:03.497639 2168352 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0813 04:40:03.497660 2168352 cache.go:56] Caching tarball of preloaded images
	I0813 04:40:03.497803 2168352 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 04:40:03.497825 2168352 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 04:40:03.497921 2168352 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/embed-certs-20210813044003-2022292/config.json ...
	I0813 04:40:03.497945 2168352 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/embed-certs-20210813044003-2022292/config.json: {Name:mk2d80c480fb3c99468a410a03e79e972723594b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:40:03.498092 2168352 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 04:40:03.532033 2168352 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 04:40:03.532062 2168352 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 04:40:03.532083 2168352 cache.go:205] Successfully downloaded all kic artifacts
	I0813 04:40:03.532113 2168352 start.go:313] acquiring machines lock for embed-certs-20210813044003-2022292: {Name:mk1beb8b5d17dc1771955505ec31b4ed70ab6178 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 04:40:03.532642 2168352 start.go:317] acquired machines lock for "embed-certs-20210813044003-2022292" in 509.271µs
	I0813 04:40:03.532672 2168352 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20210813044003-2022292 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813044003-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 04:40:03.532748 2168352 start.go:126] createHost starting for "" (driver="docker")
	I0813 04:40:03.535863 2168352 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 04:40:03.536086 2168352 start.go:160] libmachine.API.Create for "embed-certs-20210813044003-2022292" (driver="docker")
	I0813 04:40:03.536115 2168352 client.go:168] LocalClient.Create starting
	I0813 04:40:03.536168 2168352 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 04:40:03.536194 2168352 main.go:130] libmachine: Decoding PEM data...
	I0813 04:40:03.536219 2168352 main.go:130] libmachine: Parsing certificate...
	I0813 04:40:03.536345 2168352 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 04:40:03.536368 2168352 main.go:130] libmachine: Decoding PEM data...
	I0813 04:40:03.536383 2168352 main.go:130] libmachine: Parsing certificate...
	I0813 04:40:03.536740 2168352 cli_runner.go:115] Run: docker network inspect embed-certs-20210813044003-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 04:40:03.565509 2168352 cli_runner.go:162] docker network inspect embed-certs-20210813044003-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 04:40:03.565579 2168352 network_create.go:255] running [docker network inspect embed-certs-20210813044003-2022292] to gather additional debugging logs...
	I0813 04:40:03.565593 2168352 cli_runner.go:115] Run: docker network inspect embed-certs-20210813044003-2022292
	W0813 04:40:03.597241 2168352 cli_runner.go:162] docker network inspect embed-certs-20210813044003-2022292 returned with exit code 1
	I0813 04:40:03.597265 2168352 network_create.go:258] error running [docker network inspect embed-certs-20210813044003-2022292]: docker network inspect embed-certs-20210813044003-2022292: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20210813044003-2022292
	I0813 04:40:03.597287 2168352 network_create.go:260] output of [docker network inspect embed-certs-20210813044003-2022292]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20210813044003-2022292
	
	** /stderr **
	I0813 04:40:03.597346 2168352 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 04:40:03.631553 2168352 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x4000bee598] misses:0}
	I0813 04:40:03.631598 2168352 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 04:40:03.631618 2168352 network_create.go:106] attempt to create docker network embed-certs-20210813044003-2022292 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 04:40:03.631665 2168352 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20210813044003-2022292
	I0813 04:40:03.703263 2168352 network_create.go:90] docker network embed-certs-20210813044003-2022292 192.168.49.0/24 created
	I0813 04:40:03.703289 2168352 kic.go:106] calculated static IP "192.168.49.2" for the "embed-certs-20210813044003-2022292" container
	I0813 04:40:03.703359 2168352 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 04:40:03.732790 2168352 cli_runner.go:115] Run: docker volume create embed-certs-20210813044003-2022292 --label name.minikube.sigs.k8s.io=embed-certs-20210813044003-2022292 --label created_by.minikube.sigs.k8s.io=true
	I0813 04:40:03.762318 2168352 oci.go:102] Successfully created a docker volume embed-certs-20210813044003-2022292
	I0813 04:40:03.762394 2168352 cli_runner.go:115] Run: docker run --rm --name embed-certs-20210813044003-2022292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210813044003-2022292 --entrypoint /usr/bin/test -v embed-certs-20210813044003-2022292:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 04:40:04.394293 2168352 oci.go:106] Successfully prepared a docker volume embed-certs-20210813044003-2022292
	W0813 04:40:04.394342 2168352 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 04:40:04.394350 2168352 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 04:40:04.394403 2168352 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 04:40:04.394606 2168352 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 04:40:04.394625 2168352 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 04:40:04.394669 2168352 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210813044003-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 04:40:04.541808 2168352 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20210813044003-2022292 --name embed-certs-20210813044003-2022292 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210813044003-2022292 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20210813044003-2022292 --network embed-certs-20210813044003-2022292 --ip 192.168.49.2 --volume embed-certs-20210813044003-2022292:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 04:40:05.070834 2168352 cli_runner.go:115] Run: docker container inspect embed-certs-20210813044003-2022292 --format={{.State.Running}}
	I0813 04:40:05.123775 2168352 cli_runner.go:115] Run: docker container inspect embed-certs-20210813044003-2022292 --format={{.State.Status}}
	I0813 04:40:05.180752 2168352 cli_runner.go:115] Run: docker exec embed-certs-20210813044003-2022292 stat /var/lib/dpkg/alternatives/iptables
	I0813 04:40:05.273964 2168352 oci.go:278] the created container "embed-certs-20210813044003-2022292" has a running status.
	I0813 04:40:05.273993 2168352 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813044003-2022292/id_rsa...
	I0813 04:40:05.943409 2168352 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813044003-2022292/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 04:40:06.125660 2168352 cli_runner.go:115] Run: docker container inspect embed-certs-20210813044003-2022292 --format={{.State.Status}}
	I0813 04:40:06.202340 2168352 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 04:40:06.202354 2168352 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210813044003-2022292 chown docker:docker /home/docker/.ssh/authorized_keys]
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	02ccfef662e0d       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   5                   960542ed32a3c
	e11993011a0ac       85e6c0cff043f       4 minutes ago        Running             kubernetes-dashboard        0                   ae04aff957f34
	915c40344b615       ba04bb24b9575       5 minutes ago        Running             storage-provisioner         2                   942d12e08bf17
	7a9bd6f5dfdb6       7e8edeee9a1e7       5 minutes ago        Running             coredns                     0                   c9acbb8f87b14
	d85e3151586c4       1611cd07b61d5       5 minutes ago        Running             busybox                     1                   bf2f61e77e5ab
	2cab4a5a3763e       239d456d2eb64       5 minutes ago        Running             kube-proxy                  1                   6e0ea29b0ef16
	3a77288a95c8f       ba04bb24b9575       5 minutes ago        Exited              storage-provisioner         1                   942d12e08bf17
	628392b45a06c       f37b7c809e5dc       5 minutes ago        Running             kindnet-cni                 1                   a3176be0a5d01
	9a2b26f090b88       ad99d3ead043f       6 minutes ago        Running             etcd                        1                   cd5400f12a4f7
	5f4fe9ac4b5ec       c303a8bf065e7       6 minutes ago        Running             kube-scheduler              1                   de11ff4464f65
	2cc7d2beeb7ce       1c225a51d1163       6 minutes ago        Running             kube-controller-manager     0                   6b283da941f27
	d341dc7c44990       61c4f4cdad81d       6 minutes ago        Running             kube-apiserver              1                   146b3f751bede
	dfbbc0e066836       1611cd07b61d5       6 minutes ago        Exited              busybox                     0                   87c06e7639f49
	f97ff7f944d49       239d456d2eb64       7 minutes ago        Exited              kube-proxy                  0                   df02a319054c5
	b92bdf938c94f       f37b7c809e5dc       7 minutes ago        Exited              kindnet-cni                 0                   011ae768e4d31
	ae7249685fc33       c303a8bf065e7       8 minutes ago        Exited              kube-scheduler              0                   dd718da5c6566
	ede934da6daa0       ad99d3ead043f       8 minutes ago        Exited              etcd                        0                   6ec38dc64d06a
	fa0f286b4358e       61c4f4cdad81d       8 minutes ago        Exited              kube-apiserver              0                   57141bbb26626
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 04:33:44 UTC, end at Fri 2021-08-13 04:40:11 UTC. --
	Aug 13 04:39:23 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:23.356912850Z" level=info msg="Finish piping \"stdout\" of container exec \"a6283438d5c0c9b97cdfc7a71d0d53359a90dbc332cbb532f3594ac8704c4604\""
	Aug 13 04:39:23 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:23.356954080Z" level=info msg="Exec process \"a6283438d5c0c9b97cdfc7a71d0d53359a90dbc332cbb532f3594ac8704c4604\" exits with exit code 0 and error <nil>"
	Aug 13 04:39:23 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:23.358513696Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" returns with exit code 0"
	Aug 13 04:39:33 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:33.237349201Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 04:39:33 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:33.471804743Z" level=info msg="Finish piping \"stderr\" of container exec \"9b628023d87b0936b51aebd5c15356f1a73becf0c6c589e6048fb43d025464f2\""
	Aug 13 04:39:33 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:33.471930625Z" level=info msg="Finish piping \"stdout\" of container exec \"9b628023d87b0936b51aebd5c15356f1a73becf0c6c589e6048fb43d025464f2\""
	Aug 13 04:39:33 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:33.472275425Z" level=info msg="Exec process \"9b628023d87b0936b51aebd5c15356f1a73becf0c6c589e6048fb43d025464f2\" exits with exit code 0 and error <nil>"
	Aug 13 04:39:33 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:33.473673065Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" returns with exit code 0"
	Aug 13 04:39:43 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:43.229960693Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 04:39:43 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:43.458590790Z" level=info msg="Finish piping \"stderr\" of container exec \"9868f3dab02bf16da83776d55888e6d4f7346cd1f64540abb17a34f546d61e50\""
	Aug 13 04:39:43 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:43.458848035Z" level=info msg="Finish piping \"stdout\" of container exec \"9868f3dab02bf16da83776d55888e6d4f7346cd1f64540abb17a34f546d61e50\""
	Aug 13 04:39:43 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:43.459392448Z" level=info msg="Exec process \"9868f3dab02bf16da83776d55888e6d4f7346cd1f64540abb17a34f546d61e50\" exits with exit code 0 and error <nil>"
	Aug 13 04:39:43 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:43.460593438Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" returns with exit code 0"
	Aug 13 04:39:53 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:53.229655239Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 04:39:53 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:53.360415656Z" level=info msg="Finish piping \"stderr\" of container exec \"ddf858d23c5889f40da18393e480db0f8d32b83b67608e165f9a21b224a1a338\""
	Aug 13 04:39:53 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:53.360598177Z" level=info msg="Finish piping \"stdout\" of container exec \"ddf858d23c5889f40da18393e480db0f8d32b83b67608e165f9a21b224a1a338\""
	Aug 13 04:39:53 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:53.360712728Z" level=info msg="Exec process \"ddf858d23c5889f40da18393e480db0f8d32b83b67608e165f9a21b224a1a338\" exits with exit code 0 and error <nil>"
	Aug 13 04:39:53 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:53.362021679Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" returns with exit code 0"
	Aug 13 04:40:03 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:03.229687443Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 04:40:03 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:03.395947504Z" level=info msg="Finish piping \"stderr\" of container exec \"42a293dd41ee4b515071f8e6691589979aaa47962287eac562ea022b80bc42bd\""
	Aug 13 04:40:03 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:03.396236116Z" level=info msg="Finish piping \"stdout\" of container exec \"42a293dd41ee4b515071f8e6691589979aaa47962287eac562ea022b80bc42bd\""
	Aug 13 04:40:03 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:03.396641126Z" level=info msg="Exec process \"42a293dd41ee4b515071f8e6691589979aaa47962287eac562ea022b80bc42bd\" exits with exit code 0 and error <nil>"
	Aug 13 04:40:03 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:03.397909027Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" returns with exit code 0"
	Aug 13 04:40:08 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:08.869062941Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 13 04:40:10 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:10.364791490Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	
	* 
	* ==> coredns [7a9bd6f5dfdb6d03edd105f1154ed065355e637e0ff0ce6300dbac8f7c2a65bb] <==
	* .:53
	2021-08-13T04:34:40.307Z [INFO] CoreDNS-1.3.1
	2021-08-13T04:34:40.307Z [INFO] linux/arm64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/arm64, go1.11.4, 6b56a9c
	2021-08-13T04:34:40.307Z [INFO] plugin/reload: Running configuration MD5 = 84554e3bcd896bd44d28b54cbac27490
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210813043048-2022292
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-20210813043048-2022292
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=old-k8s-version-20210813043048-2022292
	                    minikube.k8s.io/updated_at=2021_08_13T04_31_56_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 04:31:48 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 04:40:10 +0000   Fri, 13 Aug 2021 04:31:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 04:40:10 +0000   Fri, 13 Aug 2021 04:31:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 04:40:10 +0000   Fri, 13 Aug 2021 04:31:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 13 Aug 2021 04:40:10 +0000   Fri, 13 Aug 2021 04:40:08 +0000   KubeletNotReady              container runtime status check may not have completed yet.
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    old-k8s-version-20210813043048-2022292
	Capacity:
	 cpu:                2
	 ephemeral-storage:  40474572Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 hugepages-32Mi:     0
	 hugepages-64Ki:     0
	 memory:             8033460Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  40474572Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 hugepages-32Mi:     0
	 hugepages-64Ki:     0
	 memory:             8033460Ki
	 pods:               110
	System Info:
	 Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	 System UUID:                d621cc27-9f13-443a-bda2-3bdb78571685
	 Boot ID:                    0b91f2d0-31de-4b03-9973-67e3d0024ffb
	 Kernel Version:             5.8.0-1041-aws
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               arm64
	 Container Runtime Version:  containerd://1.4.6
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (12 in total)
	  Namespace                  Name                                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	  kube-system                coredns-fb8b8dccf-cfbng                                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m32s
	  kube-system                etcd-old-k8s-version-20210813043048-2022292                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                kindnet-f6spd                                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m3s
	  kube-system                kube-apiserver-old-k8s-version-20210813043048-2022292             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m29s
	  kube-system                kube-controller-manager-old-k8s-version-20210813043048-2022292    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                kube-proxy-9hh9m                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                kube-scheduler-old-k8s-version-20210813043048-2022292             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                metrics-server-8546d8b77b-snn6r                                   100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         5m32s
	  kube-system                storage-provisioner                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-n6ssn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-xmnmv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type     Reason                            Age                    From                                                Message
	  ----     ------                            ----                   ----                                                -------
	  Normal   NodeHasSufficientMemory           8m39s (x8 over 8m39s)  kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             8m39s (x7 over 8m39s)  kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              8m39s (x8 over 8m39s)  kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	  Normal   Starting                          7m54s                  kube-proxy, old-k8s-version-20210813043048-2022292  Starting kube-proxy.
	  Normal   Starting                          6m5s                   kubelet, old-k8s-version-20210813043048-2022292     Starting kubelet.
	  Normal   NodeHasSufficientMemory           6m5s (x8 over 6m5s)    kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             6m5s (x8 over 6m5s)    kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              6m5s (x7 over 6m5s)    kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	  Normal   Starting                          5m47s                  kube-proxy, old-k8s-version-20210813043048-2022292  Starting kube-proxy.
	  Warning  FailedNodeAllocatableEnforcement  65s (x6 over 6m5s)     kubelet, old-k8s-version-20210813043048-2022292     Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: Failed to set config for supported subsystems : failed to write 0 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/hugetlb.64kB.limit_in_bytes: permission denied
	  Normal   Starting                          3s                     kubelet, old-k8s-version-20210813043048-2022292     Starting kubelet.
	  Normal   NodeHasSufficientMemory           3s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             3s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              3s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady                      3s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeNotReady
	  Normal   Starting                          1s                     kubelet, old-k8s-version-20210813043048-2022292     Starting kubelet.
	  Normal   NodeHasSufficientMemory           1s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             1s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              1s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.001061] FS-Cache: O-key=[8] 'ce42040000000000'
	[  +0.000800] FS-Cache: N-cookie c=00000000f491aea5 [p=00000000dc37798f fl=2 nc=0 na=1]
	[  +0.001313] FS-Cache: N-cookie d=0000000029214e1b n=00000000a36cc3bc
	[  +0.001052] FS-Cache: N-key=[8] 'ce42040000000000'
	[Aug13 04:05] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=0000000049ef8e94 [p=00000000dc37798f fl=226 nc=0 na=1]
	[  +0.001324] FS-Cache: O-cookie d=0000000029214e1b n=0000000070677f7f
	[  +0.001053] FS-Cache: O-key=[8] 'ae42040000000000'
	[  +0.000801] FS-Cache: N-cookie c=000000000ce39a3e [p=00000000dc37798f fl=2 nc=0 na=1]
	[  +0.001320] FS-Cache: N-cookie d=0000000029214e1b n=000000005ea55429
	[  +0.001052] FS-Cache: N-key=[8] 'ae42040000000000'
	[  +0.001492] FS-Cache: Duplicate cookie detected
	[  +0.000804] FS-Cache: O-cookie c=00000000ea269615 [p=00000000dc37798f fl=226 nc=0 na=1]
	[  +0.001313] FS-Cache: O-cookie d=0000000029214e1b n=000000000e250366
	[  +0.001056] FS-Cache: O-key=[8] 'ce42040000000000'
	[  +0.000797] FS-Cache: N-cookie c=000000000ce39a3e [p=00000000dc37798f fl=2 nc=0 na=1]
	[  +0.001309] FS-Cache: N-cookie d=0000000029214e1b n=000000001f865425
	[  +0.001050] FS-Cache: N-key=[8] 'ce42040000000000'
	[  +0.001469] FS-Cache: Duplicate cookie detected
	[  +0.000798] FS-Cache: O-cookie c=000000001a114129 [p=00000000dc37798f fl=226 nc=0 na=1]
	[  +0.001324] FS-Cache: O-cookie d=0000000029214e1b n=0000000016cfc1b9
	[  +0.001049] FS-Cache: O-key=[8] 'b042040000000000'
	[  +0.000800] FS-Cache: N-cookie c=000000000ce39a3e [p=00000000dc37798f fl=2 nc=0 na=1]
	[  +0.001305] FS-Cache: N-cookie d=0000000029214e1b n=00000000e873df18
	[  +0.001054] FS-Cache: N-key=[8] 'b042040000000000'
	
	* 
	* ==> etcd [9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d] <==
	* 2021-08-13 04:34:08.784797 I | etcdserver: advertise client URLs = https://192.168.58.2:2379
	2021-08-13 04:34:08.840468 I | etcdserver: restarting member b2c6679ac05f2cf1 in cluster 3a56e4ca95e2355c at commit index 517
	2021-08-13 04:34:08.840518 I | raft: b2c6679ac05f2cf1 became follower at term 2
	2021-08-13 04:34:08.840530 I | raft: newRaft b2c6679ac05f2cf1 [peers: [], term: 2, commit: 517, applied: 0, lastindex: 517, lastterm: 2]
	2021-08-13 04:34:12.837427 W | auth: simple token is not cryptographically signed
	2021-08-13 04:34:12.839902 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
	2021-08-13 04:34:12.842481 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2021-08-13 04:34:12.842554 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-13 04:34:12.842577 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-13 04:34:12.844893 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 04:34:12.845095 I | embed: listening for metrics on http://192.168.58.2:2381
	2021-08-13 04:34:12.845251 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 04:34:14.274882 I | raft: b2c6679ac05f2cf1 is starting a new election at term 2
	2021-08-13 04:34:14.274963 I | raft: b2c6679ac05f2cf1 became candidate at term 3
	2021-08-13 04:34:14.274995 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3
	2021-08-13 04:34:14.280396 I | raft: b2c6679ac05f2cf1 became leader at term 3
	2021-08-13 04:34:14.280429 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3
	2021-08-13 04:34:14.292375 I | etcdserver: published {Name:old-k8s-version-20210813043048-2022292 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-13 04:34:14.292445 I | embed: ready to serve client requests
	2021-08-13 04:34:14.292505 I | embed: ready to serve client requests
	2021-08-13 04:34:14.294070 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 04:34:14.362178 I | embed: serving client requests on 192.168.58.2:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-13 04:40:10.492537 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210813043048-2022292\" " with result "range_response_count:1 size:3742" took too long (113.844927ms) to execute
	
	* 
	* ==> etcd [ede934da6daa0251467435d6134da5714d2aaaa246642ee4b092bf23b20b8a4d] <==
	* 2021-08-13 04:31:37.808090 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-13 04:31:37.966692 W | etcdserver: request "ID:3238505112327492611 Method:\"PUT\" Path:\"/0/version\" Val:\"3.3.0\" " with result "" took too long (114.896531ms) to execute
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-13 04:31:50.570496 W | etcdserver: request "header:<ID:3238505112327493035 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:replication-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:replication-controller\" value_size:408 >> failure:<>>" with result "size:14" took too long (130.855391ms) to execute
	2021-08-13 04:31:50.570819 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (100.941765ms) to execute
	2021-08-13 04:31:51.147041 W | etcdserver: read-only range request "key:\"/registry/roles/kube-system/system:controller:cloud-provider\" " with result "range_response_count:0 size:5" took too long (141.699039ms) to execute
	2021-08-13 04:31:51.463486 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system::leader-locking-kube-controller-manager\" " with result "range_response_count:0 size:5" took too long (128.435378ms) to execute
	2021-08-13 04:31:53.025745 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" " with result "range_response_count:0 size:5" took too long (117.896656ms) to execute
	2021-08-13 04:31:53.229545 W | etcdserver: request "header:<ID:3238505112327493185 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kube-system/resourcequota-controller-token-sr8xz\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-system/resourcequota-controller-token-sr8xz\" value_size:2397 >> failure:<>>" with result "size:16" took too long (100.486512ms) to execute
	2021-08-13 04:31:53.233342 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-20210813043048-2022292.169ac36e083ebb7a\" " with result "range_response_count:1 size:551" took too long (106.833567ms) to execute
	2021-08-13 04:31:53.413984 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (101.518212ms) to execute
	2021-08-13 04:31:53.726516 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/statefulset-controller\" " with result "range_response_count:0 size:5" took too long (151.15078ms) to execute
	2021-08-13 04:31:55.366271 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/attachdetach-controller\" " with result "range_response_count:1 size:214" took too long (173.20919ms) to execute
	2021-08-13 04:31:55.599231 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (195.261657ms) to execute
	2021-08-13 04:31:55.831787 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" " with result "range_response_count:1 size:216" took too long (152.476608ms) to execute
	2021-08-13 04:31:55.975555 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" " with result "range_response_count:0 size:5" took too long (131.820629ms) to execute
	2021-08-13 04:32:09.090464 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-node-lease/default\" " with result "range_response_count:1 size:189" took too long (121.150738ms) to execute
	2021-08-13 04:32:09.090620 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:173" took too long (117.948054ms) to execute
	2021-08-13 04:32:09.538758 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-fb8b8dccf-6jxrq\" " with result "range_response_count:1 size:1440" took too long (231.490649ms) to execute
	2021-08-13 04:32:09.538809 W | etcdserver: request "header:<ID:3238505112327493522 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns.169ac376942362f9\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns.169ac376942362f9\" value_size:345 lease:3238505112327493143 >> failure:<>>" with result "size:16" took too long (121.564805ms) to execute
	2021-08-13 04:32:09.749317 W | etcdserver: request "header:<ID:3238505112327493531 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-fb8b8dccf-6jxrq\" mod_revision:360 > success:<request_delete_range:<key:\"/registry/pods/kube-system/coredns-fb8b8dccf-6jxrq\" > > failure:<request_range:<key:\"/registry/pods/kube-system/coredns-fb8b8dccf-6jxrq\" > >>" with result "size:18" took too long (100.955034ms) to execute
	2021-08-13 04:32:10.221490 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:458" took too long (117.968379ms) to execute
	2021-08-13 04:32:10.221663 W | etcdserver: read-only range request "key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" " with result "range_response_count:0 size:5" took too long (142.571078ms) to execute
	2021-08-13 04:32:11.557373 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (116.619745ms) to execute
	
	* 
	* ==> kernel <==
	*  04:40:12 up 14:22,  0 users,  load average: 2.25, 2.01, 2.05
	Linux old-k8s-version-20210813043048-2022292 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [d341dc7c44990fd954f53b0a062faebb1aaab9aec1922164f28c6a8b52387f03] <==
	* I0813 04:39:59.153522       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:00.153636       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:00.153748       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:01.153867       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:01.153973       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:02.154099       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:02.154366       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:03.154539       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:03.154837       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:04.154962       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:04.155106       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:05.159911       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:05.160610       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:06.160733       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:06.160922       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:08.459007       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:08.459136       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:09.459275       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:09.459411       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:10.464434       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:10.464574       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:11.464693       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:11.464870       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:12.468447       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:12.468560       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-apiserver [fa0f286b4358e9cb1e23be3a2c4603853d2e15fe2cb82a1287187209fb40ff26] <==
	* I0813 04:33:10.996657       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:10.996789       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:11.996898       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:11.996991       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:12.997098       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:12.997223       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:13.997336       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:13.997456       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:14.997563       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:14.997788       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:15.997879       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:15.998013       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:16.998134       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:16.998337       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:17.998420       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:17.998549       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:18.998656       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:18.998770       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:19.998879       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:19.999042       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:20.999148       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:20.999355       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:21.999472       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:21.999736       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:23.002604       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	
	* 
	* ==> kube-controller-manager [2cc7d2beeb7ce652e2b5700da3ab1c5f73bc77e9ede2226da137650b001baf2c] <==
	* E0813 04:35:18.978277       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 04:35:18.978303       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"dd0d4c9e-fbef-11eb-96a3-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 04:35:18.986425       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 04:35:18.986419       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"dd0d4c9e-fbef-11eb-96a3-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 04:35:19.021115       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"dd0a0f5e-fbef-11eb-96a3-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-n6ssn
	I0813 04:35:19.035789       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"dd0d4c9e-fbef-11eb-96a3-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-xmnmv
	E0813 04:35:41.645644       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:35:44.227588       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:36:11.896799       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:36:16.228655       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:36:42.147884       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:36:48.229847       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:37:12.398888       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:37:20.231680       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:37:42.649947       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:37:52.232963       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:38:12.901093       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:38:24.233994       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:38:43.152367       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:38:56.235146       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:39:13.404389       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:39:28.236401       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:39:43.655869       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:40:00.237782       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0813 04:40:09.650288       1 node_lifecycle_controller.go:1009] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	
	* 
	* ==> kube-proxy [2cab4a5a3763eed528d32e62fb4162c211413444ef90a588ae2383caa8746b65] <==
	* W0813 04:34:24.291897       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 04:34:24.312453       1 server_others.go:148] Using iptables Proxier.
	I0813 04:34:24.312622       1 server_others.go:178] Tearing down inactive rules.
	I0813 04:34:24.744786       1 server.go:555] Version: v1.14.0
	I0813 04:34:24.784436       1 config.go:202] Starting service config controller
	I0813 04:34:24.784452       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 04:34:24.784602       1 config.go:102] Starting endpoints config controller
	I0813 04:34:24.784607       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 04:34:24.884643       1 controller_utils.go:1034] Caches are synced for service config controller
	I0813 04:34:24.884825       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	
	* 
	* ==> kube-proxy [f97ff7f944d495d64caffa30c2356c092fd186015004d469c02f63268e544e07] <==
	* W0813 04:32:16.041359       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 04:32:16.052644       1 server_others.go:148] Using iptables Proxier.
	I0813 04:32:16.052851       1 server_others.go:178] Tearing down inactive rules.
	I0813 04:32:17.340779       1 server.go:555] Version: v1.14.0
	I0813 04:32:17.360896       1 config.go:102] Starting endpoints config controller
	I0813 04:32:17.360954       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 04:32:17.361002       1 config.go:202] Starting service config controller
	I0813 04:32:17.361053       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 04:32:17.461077       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0813 04:32:17.461184       1 controller_utils.go:1034] Caches are synced for service config controller
	
	* 
	* ==> kube-scheduler [5f4fe9ac4b5eccc0e25925f5407f0baa4521cdec2a18fca6b79b07c606e35e6b] <==
	* I0813 04:34:10.975461       1 serving.go:319] Generated self-signed cert in-memory
	W0813 04:34:14.395349       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	W0813 04:34:14.395368       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	W0813 04:34:14.395378       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	I0813 04:34:14.397958       1 server.go:142] Version: v1.14.0
	I0813 04:34:14.402192       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0813 04:34:14.404822       1 authorization.go:47] Authorization is disabled
	W0813 04:34:14.404836       1 authentication.go:55] Authentication is disabled
	I0813 04:34:14.404847       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 04:34:14.405293       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 04:34:22.463902       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0813 04:34:22.464242       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0813 04:34:22.464956       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0813 04:34:22.465253       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	I0813 04:34:23.919558       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 04:34:24.020407       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kube-scheduler [ae7249685fc33d11e70d8b4c8dc90a3b3f608ae9e4bb557a1f3744b580e24317] <==
	* W0813 04:31:41.118844       1 authentication.go:55] Authentication is disabled
	I0813 04:31:41.118866       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 04:31:41.119322       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 04:31:48.328343       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 04:31:48.333317       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 04:31:48.333386       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 04:31:48.333421       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 04:31:48.333463       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 04:31:48.333494       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 04:31:48.333585       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 04:31:48.333663       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 04:31:48.335300       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 04:31:48.347660       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 04:31:49.329335       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 04:31:49.334343       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 04:31:49.335382       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 04:31:49.341191       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 04:31:49.345426       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 04:31:49.352794       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 04:31:49.357193       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 04:31:49.359889       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 04:31:49.361364       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 04:31:49.365411       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0813 04:31:51.221914       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 04:31:51.322112       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 04:33:44 UTC, end at Fri 2021-08-13 04:40:13 UTC. --
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.036355    6248 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x400090d390, READY
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.051749    6248 kuberuntime_manager.go:210] Container runtime containerd initialized, version: 1.4.6, apiVersion: v1alpha2
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.052081    6248 server.go:1037] Started kubelet
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.062660    6248 server.go:141] Starting to listen on 0.0.0.0:10250
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.063346    6248 server.go:343] Adding debug handlers to kubelet server.
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.065806    6248 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.065849    6248 status_manager.go:152] Starting to sync pod status with apiserver
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.065861    6248 kubelet.go:1806] Starting kubelet main sync loop.
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.065882    6248 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.066077    6248 volume_manager.go:248] Starting Kubelet Volume Manager
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.070954    6248 desired_state_of_world_populator.go:130] Desired state populator starts to run
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.079967    6248 clientconn.go:440] parsed scheme: "unix"
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.079988    6248 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.080013    6248 asm_arm64.s:1128] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0  <nil>}]
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.080022    6248 clientconn.go:796] ClientConn switching balancer to "pick_first"
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.080059    6248 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x400099f160, CONNECTING
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.080148    6248 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x400099f160, READY
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.166104    6248 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.172914    6248 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.0.0/24
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.173096    6248 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.173976    6248 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.174121    6248 kubelet_node_status.go:72] Attempting to register node old-k8s-version-20210813043048-2022292
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.311029    6248 kubelet_node_status.go:114] Node old-k8s-version-20210813043048-2022292 was previously registered
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.311215    6248 kubelet_node_status.go:75] Successfully registered node old-k8s-version-20210813043048-2022292
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 kubelet[6248]: I0813 04:40:13.367114    6248 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
	
	* 
	* ==> kubernetes-dashboard [e11993011a0ace13a844916905a50865343256deec1aed625969e72928ed06a3] <==
	* 2021/08/13 04:35:19 Using namespace: kubernetes-dashboard
	2021/08/13 04:35:19 Using in-cluster config to connect to apiserver
	2021/08/13 04:35:19 Using secret token for csrf signing
	2021/08/13 04:35:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 04:35:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 04:35:19 Successful initial request to the apiserver, version: v1.14.0
	2021/08/13 04:35:19 Generating JWE encryption key
	2021/08/13 04:35:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 04:35:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 04:35:20 Initializing JWE encryption key from synchronized object
	2021/08/13 04:35:20 Creating in-cluster Sidecar client
	2021/08/13 04:35:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:35:20 Serving insecurely on HTTP port: 9090
	2021/08/13 04:35:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:36:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:36:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:37:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:37:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:38:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:38:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:39:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:39:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:35:19 Starting overwatch
	
	* 
	* ==> storage-provisioner [3a77288a95c8f00f74ab0ac8c3d250db01f7e14abc6a199fc27a47901cda01f0] <==
	* I0813 04:34:24.283465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0813 04:34:54.286079       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [915c40344b615927958968d753f3538c387ca316332f25ad8005f07afcdf921c] <==
	* I0813 04:35:09.405858       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 04:35:09.420206       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 04:35:09.420254       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 04:35:26.812051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 04:35:26.812644       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813043048-2022292_5268b0aa-4379-4687-a31f-891eff7dac7a!
	I0813 04:35:26.812813       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d4bd5ea-fbef-11eb-a893-0242577d9e8b", APIVersion:"v1", ResourceVersion:"768", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210813043048-2022292_5268b0aa-4379-4687-a31f-891eff7dac7a became leader
	I0813 04:35:26.912753       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813043048-2022292_5268b0aa-4379-4687-a31f-891eff7dac7a!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292: exit status 2 (411.156561ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210813043048-2022292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-snn6r
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210813043048-2022292 describe pod metrics-server-8546d8b77b-snn6r
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210813043048-2022292 describe pod metrics-server-8546d8b77b-snn6r: exit status 1 (189.070106ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-snn6r" not found

** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210813043048-2022292 describe pod metrics-server-8546d8b77b-snn6r: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect old-k8s-version-20210813043048-2022292
helpers_test.go:236: (dbg) docker inspect old-k8s-version-20210813043048-2022292:

-- stdout --
	[
	    {
	        "Id": "16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305",
	        "Created": "2021-08-13T04:30:50.696946161Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2156782,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-08-13T04:33:44.239454282Z",
	            "FinishedAt": "2021-08-13T04:33:42.828998558Z"
	        },
	        "Image": "sha256:ba5ae658d5b3f017bdb597cc46a1912d5eed54239e31b777788d204fdcbc4445",
	        "ResolvConfPath": "/var/lib/docker/containers/16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305/hostname",
	        "HostsPath": "/var/lib/docker/containers/16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305/hosts",
	        "LogPath": "/var/lib/docker/containers/16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305/16119f769e1b2aa43ac152ae352326a4eae76d26fe1b877acf0f3b92e2bbf305-json.log",
	        "Name": "/old-k8s-version-20210813043048-2022292",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-20210813043048-2022292:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210813043048-2022292",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d1c77a4413065856e02c35ff2475a1c410daceb87303ee3c597287b4ce79c9ce-init/diff:/var/lib/docker/overlay2/7eab3572859d93b266e01c53f7180a9b812a9352d6d9de9a250b7c08853896bd/diff:/var/lib/docker/overlay2/735c75d71cfc18e90e119a4cbda44b5328f80ee140097a56e4b8d56d1d73296a/diff:/var/lib/docker/overlay2/a3e21a33abd0bc635f6c01d5065127b0c6ae8648e27621bc2af8480371e0e000/diff:/var/lib/docker/overlay2/81573b84b43b2908098dbf411f4127aea8745e37aa0ee2f3bcf32f2378aef923/diff:/var/lib/docker/overlay2/633406c91e496c6ee40740050d85641e9c1f2bf787ba64a82f892910362ceeb3/diff:/var/lib/docker/overlay2/deb8d862aaef5e3fc2ec77b3f1839b07c4f6998399f4f111cd38226c004f70b0/diff:/var/lib/docker/overlay2/57b3638e691861d96d431a19402174c1139d2ff0280c08c71a81a8fcf9390e79/diff:/var/lib/docker/overlay2/6e43f99fe3b29b8ef7a4f065a75009878de2e2c2f4298c42eaf887f7602bbc6e/diff:/var/lib/docker/overlay2/cf9d28926b8190588c7af7d8b25156aee75f2abd04071b6e2a0a0fbf2e143dee/diff:/var/lib/docker/overlay2/6aa3171af6f20f0682732cc4019152e4d5b0846e1ebda0a27c41c772e1cde011/diff:/var/lib/docker/overlay2/868a81f13eb2fedd1a1cb40eaf1c94ba3507a2ce88acff3fbbe9324b52a4b161/diff:/var/lib/docker/overlay2/162214348b4cea5219287565f6d7e0dd459b26bcc50e3db36cf72c667b547528/diff:/var/lib/docker/overlay2/9dbad12bae2f76b71152f7b4515e05d4b998ecec3e6ee896abcec7a80dcd2bea/diff:/var/lib/docker/overlay2/6cabd7857a22f00b0aba07331d6ccd89db9770531c0aa2f6fe5dd0f2cfdf0571/diff:/var/lib/docker/overlay2/d37830ed714a3f12f75bdb0787ab6a0b95fa84f6f2ba7cfce7c0088eae46490b/diff:/var/lib/docker/overlay2/d1f89b0ec8b42bfa6422a1c60a32bf10de45dc549f369f5a7cab728a58edc9f6/diff:/var/lib/docker/overlay2/23f19b760877b914dfe08fbc57f540b6d7a01f94b06b51f27fd6b0307358f0c7/diff:/var/lib/docker/overlay2/a5a77daab231d8d9f6bccde006a207ac55eba70f1221af6acf584668b6732875/diff:/var/lib/docker/overlay2/8d8735d77324b45253a6a19c95ccc69efbb75db0817acd436b005907edf2edcf/diff:/var/lib/docker/overlay2/a7baa651956578e18a5f1b4650eb08a3fde481426f62eca9488d43b89516af4a/diff:/var/lib/docker/overlay2/bce892b3b410ea92f44fedfdc2ee2fa21cfd1fb09da0f3f710f4127436dee1da/diff:/var/lib/docker/overlay2/5fd9b1d93e98bad37f9fb94802b81ef99b54fe312c33006d1efe3e0a4d018218/diff:/var/lib/docker/overlay2/4fa01f36ea63b13ec54182dc384831ff6ba4af27e4e0af13a679984676a4444c/diff:/var/lib/docker/overlay2/63fcd873b6d3120225858a1625cd3b62111df43d3ee0a5fc67083b6912d73a0b/diff:/var/lib/docker/overlay2/2a89e5c9c4b59c0940b10344a4b9bcc69aa162cbdaff6b115404618622a39bf7/diff:/var/lib/docker/overlay2/f08c2886bdfdaf347184cfc06f22457c321676b0bed884791f82f2e3871b640d/diff:/var/lib/docker/overlay2/2f28445803213dc1a6a1b2c687d83ad65dbc018184c663d1f55aa1e8ba26c71c/diff:/var/lib/docker/overlay2/b380dc70af7cf929aaac54e718efbf169fc3994906ab4c15442ddcb1b9973044/diff:/var/lib/docker/overlay2/78fc6ffaa10b2fbce9cefb40ac36aad6ac1d9d90eb27a39dc3316a9c7925b6e9/diff:/var/lib/docker/overlay2/14ee7ddeeb1d52f6956390ca75ff1c67feb8f463a7590e4e021a61251ed42ace/diff:/var/lib/docker/overlay2/99b8cd45c95f310665f0002ff1e8a6932c40fe872e3daa332d0b6f0cc41f09f7/diff:/var/lib/docker/overlay2/efc742edfe683b14be0e72910049a54bf7b14ac798aa52a5e0f2839e1192b382/diff:/var/lib/docker/overlay2/d038d2ed6aff52af29d17eeb4de8728511045dbe49430059212877f1ae82f24b/diff:/var/lib/docker/overlay2/413fdf0e0da33dff95cacfd58fb4d7eb00b56c1777905c5671426293e1236f21/diff:/var/lib/docker/overlay2/88c5007e3d3e219079cebf81af5c22026c5923305801eacb5affe25b84906e7f/diff:/var/lib/docker/overlay2/e989119af87381d107830638584e78f0bf616a31754948372e177ffcdfb821fb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d1c77a4413065856e02c35ff2475a1c410daceb87303ee3c597287b4ce79c9ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d1c77a4413065856e02c35ff2475a1c410daceb87303ee3c597287b4ce79c9ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d1c77a4413065856e02c35ff2475a1c410daceb87303ee3c597287b4ce79c9ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210813043048-2022292",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210813043048-2022292/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210813043048-2022292",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210813043048-2022292",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210813043048-2022292",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32fb91282f37640e435ab3a6c90145af713320496634fa170e71c73f3eb1adfb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50961"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50960"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50957"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50959"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50958"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/32fb91282f37",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210813043048-2022292": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "16119f769e1b",
	                        "old-k8s-version-20210813043048-2022292"
	                    ],
	                    "NetworkID": "aefa95d69f7e8d3baaf5a16d71f467c48c007729ab8d8d3c94f20112d49ed093",
	                    "EndpointID": "0d7250e0b50a3a263ace567820e57db3ffea8a8b8e4f001dd6da8735cf754b97",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
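The `docker inspect` output above includes the `NetworkSettings.Ports` table mapping container ports (22, 2376, 5000, 8443, 32443) to localhost host ports. A minimal sketch of extracting that mapping from an inspect-style JSON array; `host_ports` is my own helper name, and the embedded JSON is a trimmed illustrative excerpt of the dump above, not the full output:

```python
import json

# Trimmed excerpt shaped like the `docker inspect` array above
# (two of the five port bindings, values taken from the log).
inspect_json = """
[
    {
        "NetworkSettings": {
            "Ports": {
                "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "50961"}],
                "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "50958"}]
            }
        }
    }
]
"""

def host_ports(raw):
    """Map container port -> host port from a docker-inspect JSON array."""
    container = json.loads(raw)[0]  # inspect returns a one-element array
    ports = container["NetworkSettings"]["Ports"] or {}
    return {cport: bindings[0]["HostPort"]
            for cport, bindings in ports.items() if bindings}

print(host_ports(inspect_json))  # → {'22/tcp': '50961', '8443/tcp': '50958'}
```

The same extraction can be done on a live container with `docker inspect <name> --format '{{json .NetworkSettings.Ports}}'`.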
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-20210813043048-2022292 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-20210813043048-2022292 logs -n 25: (5.042150077s)
helpers_test.go:253: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                  Profile                  |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | force-systemd-flag-20210813042828-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:30:02 UTC | Fri, 13 Aug 2021 04:30:09 UTC |
	|         | force-systemd-flag-20210813042828-2022292         |                                           |         |         |                               |                               |
	| start   | -p                                                | kubernetes-upgrade-20210813042631-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:29:30 UTC | Fri, 13 Aug 2021 04:30:45 UTC |
	|         | kubernetes-upgrade-20210813042631-2022292         |                                           |         |         |                               |                               |
	|         | --memory=2200                                     |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker            |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	| delete  | -p                                                | kubernetes-upgrade-20210813042631-2022292 | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:30:45 UTC | Fri, 13 Aug 2021 04:30:48 UTC |
	|         | kubernetes-upgrade-20210813042631-2022292         |                                           |         |         |                               |                               |
	| start   | -p                                                | cert-options-20210813043009-2022292       | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:30:09 UTC | Fri, 13 Aug 2021 04:31:30 UTC |
	|         | cert-options-20210813043009-2022292               |                                           |         |         |                               |                               |
	|         | --memory=2048                                     |                                           |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1                         |                                           |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15                     |                                           |         |         |                               |                               |
	|         | --apiserver-names=localhost                       |                                           |         |         |                               |                               |
	|         | --apiserver-names=www.google.com                  |                                           |         |         |                               |                               |
	|         | --apiserver-port=8555                             |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	| -p      | cert-options-20210813043009-2022292               | cert-options-20210813043009-2022292       | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:31:30 UTC | Fri, 13 Aug 2021 04:31:30 UTC |
	|         | ssh openssl x509 -text -noout -in                 |                                           |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt             |                                           |         |         |                               |                               |
	| delete  | -p                                                | cert-options-20210813043009-2022292       | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:31:31 UTC | Fri, 13 Aug 2021 04:31:33 UTC |
	|         | cert-options-20210813043009-2022292               |                                           |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:31:33 UTC | Fri, 13 Aug 2021 04:33:08 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:30:49 UTC | Fri, 13 Aug 2021 04:33:13 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                           |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                           |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                           |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                           |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:17 UTC | Fri, 13 Aug 2021 04:33:17 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:22 UTC | Fri, 13 Aug 2021 04:33:22 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                           |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                           |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:18 UTC | Fri, 13 Aug 2021 04:33:38 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:38 UTC | Fri, 13 Aug 2021 04:33:38 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:22 UTC | Fri, 13 Aug 2021 04:33:43 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                           |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:43 UTC | Fri, 13 Aug 2021 04:33:43 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                           |         |         |                               |                               |
	| start   | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:38 UTC | Fri, 13 Aug 2021 04:39:46 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                           |         |         |                               |                               |
	|         | --driver=docker                                   |                                           |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                           |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:33:43 UTC | Fri, 13 Aug 2021 04:39:55 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                           |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                           |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                           |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                           |         |         |                               |                               |
	|         | --keep-context=false --driver=docker              |                                           |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                           |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                           |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:39:57 UTC | Fri, 13 Aug 2021 04:39:57 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                           |         |         |                               |                               |
	| pause   | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:39:57 UTC | Fri, 13 Aug 2021 04:39:57 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                           |         |         |                               |                               |
	| unpause | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:39:58 UTC | Fri, 13 Aug 2021 04:39:59 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                           |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:39:59 UTC | Fri, 13 Aug 2021 04:40:02 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210813043133-2022292         | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:40:02 UTC | Fri, 13 Aug 2021 04:40:03 UTC |
	|         | no-preload-20210813043133-2022292                 |                                           |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:40:05 UTC | Fri, 13 Aug 2021 04:40:06 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                           |         |         |                               |                               |
	| pause   | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:40:06 UTC | Fri, 13 Aug 2021 04:40:07 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                           |         |         |                               |                               |
	| unpause | -p                                                | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:40:07 UTC | Fri, 13 Aug 2021 04:40:08 UTC |
	|         | old-k8s-version-20210813043048-2022292            |                                           |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                           |         |         |                               |                               |
	| -p      | old-k8s-version-20210813043048-2022292            | old-k8s-version-20210813043048-2022292    | jenkins | v1.22.0 | Fri, 13 Aug 2021 04:40:10 UTC | Fri, 13 Aug 2021 04:40:13 UTC |
	|         | logs -n 25                                        |                                           |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------|---------|---------|-------------------------------|-------------------------------|
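The `Last Start` log below is written in klog's standard header format, `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`, as the log itself states. A minimal sketch for splitting such lines into fields when post-processing these reports; the regex and field names are my own:

```python
import re

# klog header: severity, date, time, thread id, file:line, then the message.
KLOG_RE = re.compile(
    r"^(?P<sev>[IWEF])(?P<mmdd>\d{4}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
    r"(?P<tid>\d+) (?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$"
)

def parse_klog(line):
    """Return a dict of klog header fields, or None for non-matching lines."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

sample = "I0813 04:40:03.219073 2168352 out.go:298] Setting OutFile to fd 1 ..."
rec = parse_klog(sample)
# rec["sev"] == "I", rec["file"] == "out.go", rec["line"] == "298"
```

Lines such as the `* ==> Audit <==` table rows do not match the pattern, so `parse_klog` returns `None` for them and they can be skipped.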
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 04:40:03
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 04:40:03.219073 2168352 out.go:298] Setting OutFile to fd 1 ...
	I0813 04:40:03.219154 2168352 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:40:03.219158 2168352 out.go:311] Setting ErrFile to fd 2...
	I0813 04:40:03.219162 2168352 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:40:03.219291 2168352 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 04:40:03.219567 2168352 out.go:305] Setting JSON to false
	I0813 04:40:03.220640 2168352 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":51747,"bootTime":1628777856,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 04:40:03.220730 2168352 start.go:121] virtualization:  
	I0813 04:40:03.223402 2168352 out.go:177] * [embed-certs-20210813044003-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0813 04:40:03.225587 2168352 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 04:40:03.224409 2168352 notify.go:169] Checking for updates...
	I0813 04:40:03.230119 2168352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 04:40:03.236528 2168352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 04:40:03.238537 2168352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 04:40:03.239095 2168352 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 04:40:03.289945 2168352 docker.go:132] docker version: linux-20.10.8
	I0813 04:40:03.290055 2168352 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:40:03.404596 2168352 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:40:03.335246776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:40:03.404706 2168352 docker.go:244] overlay module found
	I0813 04:40:03.407368 2168352 out.go:177] * Using the docker driver based on user configuration
	I0813 04:40:03.407388 2168352 start.go:278] selected driver: docker
	I0813 04:40:03.407394 2168352 start.go:751] validating driver "docker" against <nil>
	I0813 04:40:03.407408 2168352 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 04:40:03.407459 2168352 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 04:40:03.407532 2168352 out.go:242] ! Your cgroup does not allow setting memory.
	I0813 04:40:03.409336 2168352 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 04:40:03.409639 2168352 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:40:03.492229 2168352 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:40:03.437117915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:40:03.492363 2168352 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 04:40:03.492534 2168352 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 04:40:03.492555 2168352 cni.go:93] Creating CNI manager for ""
	I0813 04:40:03.492562 2168352 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 04:40:03.492573 2168352 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 04:40:03.492584 2168352 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 04:40:03.492589 2168352 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 04:40:03.492594 2168352 start_flags.go:277] config:
	{Name:embed-certs-20210813044003-2022292 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813044003-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 04:40:03.495899 2168352 out.go:177] * Starting control plane node embed-certs-20210813044003-2022292 in cluster embed-certs-20210813044003-2022292
	I0813 04:40:03.495932 2168352 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 04:40:03.497592 2168352 out.go:177] * Pulling base image ...
	I0813 04:40:03.497612 2168352 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 04:40:03.497639 2168352 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0813 04:40:03.497660 2168352 cache.go:56] Caching tarball of preloaded images
	I0813 04:40:03.497803 2168352 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 04:40:03.497825 2168352 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 04:40:03.497921 2168352 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/embed-certs-20210813044003-2022292/config.json ...
	I0813 04:40:03.497945 2168352 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/embed-certs-20210813044003-2022292/config.json: {Name:mk2d80c480fb3c99468a410a03e79e972723594b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:40:03.498092 2168352 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 04:40:03.532033 2168352 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 04:40:03.532062 2168352 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 04:40:03.532083 2168352 cache.go:205] Successfully downloaded all kic artifacts
	I0813 04:40:03.532113 2168352 start.go:313] acquiring machines lock for embed-certs-20210813044003-2022292: {Name:mk1beb8b5d17dc1771955505ec31b4ed70ab6178 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 04:40:03.532642 2168352 start.go:317] acquired machines lock for "embed-certs-20210813044003-2022292" in 509.271µs
	I0813 04:40:03.532672 2168352 start.go:89] Provisioning new machine with config: &{Name:embed-certs-20210813044003-2022292 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210813044003-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 04:40:03.532748 2168352 start.go:126] createHost starting for "" (driver="docker")
	I0813 04:40:03.535863 2168352 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0813 04:40:03.536086 2168352 start.go:160] libmachine.API.Create for "embed-certs-20210813044003-2022292" (driver="docker")
	I0813 04:40:03.536115 2168352 client.go:168] LocalClient.Create starting
	I0813 04:40:03.536168 2168352 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 04:40:03.536194 2168352 main.go:130] libmachine: Decoding PEM data...
	I0813 04:40:03.536219 2168352 main.go:130] libmachine: Parsing certificate...
	I0813 04:40:03.536345 2168352 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 04:40:03.536368 2168352 main.go:130] libmachine: Decoding PEM data...
	I0813 04:40:03.536383 2168352 main.go:130] libmachine: Parsing certificate...
	I0813 04:40:03.536740 2168352 cli_runner.go:115] Run: docker network inspect embed-certs-20210813044003-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 04:40:03.565509 2168352 cli_runner.go:162] docker network inspect embed-certs-20210813044003-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 04:40:03.565579 2168352 network_create.go:255] running [docker network inspect embed-certs-20210813044003-2022292] to gather additional debugging logs...
	I0813 04:40:03.565593 2168352 cli_runner.go:115] Run: docker network inspect embed-certs-20210813044003-2022292
	W0813 04:40:03.597241 2168352 cli_runner.go:162] docker network inspect embed-certs-20210813044003-2022292 returned with exit code 1
	I0813 04:40:03.597265 2168352 network_create.go:258] error running [docker network inspect embed-certs-20210813044003-2022292]: docker network inspect embed-certs-20210813044003-2022292: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20210813044003-2022292
	I0813 04:40:03.597287 2168352 network_create.go:260] output of [docker network inspect embed-certs-20210813044003-2022292]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20210813044003-2022292
	
	** /stderr **
	I0813 04:40:03.597346 2168352 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 04:40:03.631553 2168352 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x4000bee598] misses:0}
	I0813 04:40:03.631598 2168352 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 04:40:03.631618 2168352 network_create.go:106] attempt to create docker network embed-certs-20210813044003-2022292 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 04:40:03.631665 2168352 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20210813044003-2022292
	I0813 04:40:03.703263 2168352 network_create.go:90] docker network embed-certs-20210813044003-2022292 192.168.49.0/24 created
	I0813 04:40:03.703289 2168352 kic.go:106] calculated static IP "192.168.49.2" for the "embed-certs-20210813044003-2022292" container
	I0813 04:40:03.703359 2168352 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 04:40:03.732790 2168352 cli_runner.go:115] Run: docker volume create embed-certs-20210813044003-2022292 --label name.minikube.sigs.k8s.io=embed-certs-20210813044003-2022292 --label created_by.minikube.sigs.k8s.io=true
	I0813 04:40:03.762318 2168352 oci.go:102] Successfully created a docker volume embed-certs-20210813044003-2022292
	I0813 04:40:03.762394 2168352 cli_runner.go:115] Run: docker run --rm --name embed-certs-20210813044003-2022292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210813044003-2022292 --entrypoint /usr/bin/test -v embed-certs-20210813044003-2022292:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 04:40:04.394293 2168352 oci.go:106] Successfully prepared a docker volume embed-certs-20210813044003-2022292
	W0813 04:40:04.394342 2168352 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 04:40:04.394350 2168352 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 04:40:04.394403 2168352 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 04:40:04.394606 2168352 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 04:40:04.394625 2168352 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 04:40:04.394669 2168352 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-20210813044003-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 04:40:04.541808 2168352 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20210813044003-2022292 --name embed-certs-20210813044003-2022292 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20210813044003-2022292 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20210813044003-2022292 --network embed-certs-20210813044003-2022292 --ip 192.168.49.2 --volume embed-certs-20210813044003-2022292:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 04:40:05.070834 2168352 cli_runner.go:115] Run: docker container inspect embed-certs-20210813044003-2022292 --format={{.State.Running}}
	I0813 04:40:05.123775 2168352 cli_runner.go:115] Run: docker container inspect embed-certs-20210813044003-2022292 --format={{.State.Status}}
	I0813 04:40:05.180752 2168352 cli_runner.go:115] Run: docker exec embed-certs-20210813044003-2022292 stat /var/lib/dpkg/alternatives/iptables
	I0813 04:40:05.273964 2168352 oci.go:278] the created container "embed-certs-20210813044003-2022292" has a running status.
	I0813 04:40:05.273993 2168352 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813044003-2022292/id_rsa...
	I0813 04:40:05.943409 2168352 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/embed-certs-20210813044003-2022292/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 04:40:06.125660 2168352 cli_runner.go:115] Run: docker container inspect embed-certs-20210813044003-2022292 --format={{.State.Status}}
	I0813 04:40:06.202340 2168352 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 04:40:06.202354 2168352 kic_runner.go:115] Args: [docker exec --privileged embed-certs-20210813044003-2022292 chown docker:docker /home/docker/.ssh/authorized_keys]
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	02ccfef662e0d       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   5                   960542ed32a3c
	e11993011a0ac       85e6c0cff043f       4 minutes ago        Running             kubernetes-dashboard        0                   ae04aff957f34
	915c40344b615       ba04bb24b9575       5 minutes ago        Running             storage-provisioner         2                   942d12e08bf17
	7a9bd6f5dfdb6       7e8edeee9a1e7       5 minutes ago        Running             coredns                     0                   c9acbb8f87b14
	d85e3151586c4       1611cd07b61d5       5 minutes ago        Running             busybox                     1                   bf2f61e77e5ab
	2cab4a5a3763e       239d456d2eb64       5 minutes ago        Running             kube-proxy                  1                   6e0ea29b0ef16
	3a77288a95c8f       ba04bb24b9575       5 minutes ago        Exited              storage-provisioner         1                   942d12e08bf17
	628392b45a06c       f37b7c809e5dc       5 minutes ago        Running             kindnet-cni                 1                   a3176be0a5d01
	9a2b26f090b88       ad99d3ead043f       6 minutes ago        Running             etcd                        1                   cd5400f12a4f7
	5f4fe9ac4b5ec       c303a8bf065e7       6 minutes ago        Running             kube-scheduler              1                   de11ff4464f65
	2cc7d2beeb7ce       1c225a51d1163       6 minutes ago        Running             kube-controller-manager     0                   6b283da941f27
	d341dc7c44990       61c4f4cdad81d       6 minutes ago        Running             kube-apiserver              1                   146b3f751bede
	dfbbc0e066836       1611cd07b61d5       7 minutes ago        Exited              busybox                     0                   87c06e7639f49
	f97ff7f944d49       239d456d2eb64       8 minutes ago        Exited              kube-proxy                  0                   df02a319054c5
	b92bdf938c94f       f37b7c809e5dc       8 minutes ago        Exited              kindnet-cni                 0                   011ae768e4d31
	ae7249685fc33       c303a8bf065e7       8 minutes ago        Exited              kube-scheduler              0                   dd718da5c6566
	ede934da6daa0       ad99d3ead043f       8 minutes ago        Exited              etcd                        0                   6ec38dc64d06a
	fa0f286b4358e       61c4f4cdad81d       8 minutes ago        Exited              kube-apiserver              0                   57141bbb26626
	
	* 
	* ==> containerd <==
	* -- Logs begin at Fri 2021-08-13 04:33:44 UTC, end at Fri 2021-08-13 04:40:16 UTC. --
	Aug 13 04:39:33 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:33.237349201Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 04:39:33 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:33.471804743Z" level=info msg="Finish piping \"stderr\" of container exec \"9b628023d87b0936b51aebd5c15356f1a73becf0c6c589e6048fb43d025464f2\""
	Aug 13 04:39:33 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:33.471930625Z" level=info msg="Finish piping \"stdout\" of container exec \"9b628023d87b0936b51aebd5c15356f1a73becf0c6c589e6048fb43d025464f2\""
	Aug 13 04:39:33 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:33.472275425Z" level=info msg="Exec process \"9b628023d87b0936b51aebd5c15356f1a73becf0c6c589e6048fb43d025464f2\" exits with exit code 0 and error <nil>"
	Aug 13 04:39:33 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:33.473673065Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" returns with exit code 0"
	Aug 13 04:39:43 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:43.229960693Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 04:39:43 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:43.458590790Z" level=info msg="Finish piping \"stderr\" of container exec \"9868f3dab02bf16da83776d55888e6d4f7346cd1f64540abb17a34f546d61e50\""
	Aug 13 04:39:43 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:43.458848035Z" level=info msg="Finish piping \"stdout\" of container exec \"9868f3dab02bf16da83776d55888e6d4f7346cd1f64540abb17a34f546d61e50\""
	Aug 13 04:39:43 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:43.459392448Z" level=info msg="Exec process \"9868f3dab02bf16da83776d55888e6d4f7346cd1f64540abb17a34f546d61e50\" exits with exit code 0 and error <nil>"
	Aug 13 04:39:43 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:43.460593438Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" returns with exit code 0"
	Aug 13 04:39:53 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:53.229655239Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 04:39:53 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:53.360415656Z" level=info msg="Finish piping \"stderr\" of container exec \"ddf858d23c5889f40da18393e480db0f8d32b83b67608e165f9a21b224a1a338\""
	Aug 13 04:39:53 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:53.360598177Z" level=info msg="Finish piping \"stdout\" of container exec \"ddf858d23c5889f40da18393e480db0f8d32b83b67608e165f9a21b224a1a338\""
	Aug 13 04:39:53 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:53.360712728Z" level=info msg="Exec process \"ddf858d23c5889f40da18393e480db0f8d32b83b67608e165f9a21b224a1a338\" exits with exit code 0 and error <nil>"
	Aug 13 04:39:53 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:39:53.362021679Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" returns with exit code 0"
	Aug 13 04:40:03 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:03.229687443Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 13 04:40:03 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:03.395947504Z" level=info msg="Finish piping \"stderr\" of container exec \"42a293dd41ee4b515071f8e6691589979aaa47962287eac562ea022b80bc42bd\""
	Aug 13 04:40:03 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:03.396236116Z" level=info msg="Finish piping \"stdout\" of container exec \"42a293dd41ee4b515071f8e6691589979aaa47962287eac562ea022b80bc42bd\""
	Aug 13 04:40:03 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:03.396641126Z" level=info msg="Exec process \"42a293dd41ee4b515071f8e6691589979aaa47962287eac562ea022b80bc42bd\" exits with exit code 0 and error <nil>"
	Aug 13 04:40:03 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:03.397909027Z" level=info msg="ExecSync for \"9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d\" returns with exit code 0"
	Aug 13 04:40:08 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:08.869062941Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 13 04:40:10 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:10.364791490Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 13 04:40:11 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:11.860508322Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 13 04:40:13 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:13.173784244Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 13 04:40:14 old-k8s-version-20210813043048-2022292 containerd[342]: time="2021-08-13T04:40:14.878386507Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	
	* 
	* ==> coredns [7a9bd6f5dfdb6d03edd105f1154ed065355e637e0ff0ce6300dbac8f7c2a65bb] <==
	* .:53
	2021-08-13T04:34:40.307Z [INFO] CoreDNS-1.3.1
	2021-08-13T04:34:40.307Z [INFO] linux/arm64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/arm64, go1.11.4, 6b56a9c
	2021-08-13T04:34:40.307Z [INFO] plugin/reload: Running configuration MD5 = 84554e3bcd896bd44d28b54cbac27490
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210813043048-2022292
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-20210813043048-2022292
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19
	                    minikube.k8s.io/name=old-k8s-version-20210813043048-2022292
	                    minikube.k8s.io/updated_at=2021_08_13T04_31_56_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Aug 2021 04:31:48 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Aug 2021 04:40:15 +0000   Fri, 13 Aug 2021 04:31:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Aug 2021 04:40:15 +0000   Fri, 13 Aug 2021 04:31:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Aug 2021 04:40:15 +0000   Fri, 13 Aug 2021 04:31:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 13 Aug 2021 04:40:15 +0000   Fri, 13 Aug 2021 04:40:08 +0000   KubeletNotReady              container runtime status check may not have completed yet.
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    old-k8s-version-20210813043048-2022292
	Capacity:
	 cpu:                2
	 ephemeral-storage:  40474572Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 hugepages-32Mi:     0
	 hugepages-64Ki:     0
	 memory:             8033460Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  40474572Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 hugepages-32Mi:     0
	 hugepages-64Ki:     0
	 memory:             8033460Ki
	 pods:               110
	System Info:
	 Machine ID:                 80c525a0c99c4bf099c0cbf9c365b032
	 System UUID:                d621cc27-9f13-443a-bda2-3bdb78571685
	 Boot ID:                    0b91f2d0-31de-4b03-9973-67e3d0024ffb
	 Kernel Version:             5.8.0-1041-aws
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               arm64
	 Container Runtime Version:  containerd://1.4.6
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (12 in total)
	  Namespace                  Name                                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                coredns-fb8b8dccf-cfbng                                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m37s
	  kube-system                etcd-old-k8s-version-20210813043048-2022292                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                kindnet-f6spd                                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m8s
	  kube-system                kube-apiserver-old-k8s-version-20210813043048-2022292             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m34s
	  kube-system                kube-controller-manager-old-k8s-version-20210813043048-2022292    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                kube-proxy-9hh9m                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                kube-scheduler-old-k8s-version-20210813043048-2022292             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m39s
	  kube-system                metrics-server-8546d8b77b-snn6r                                   100m (5%)     0 (0%)      300Mi (3%)       0 (0%)         5m37s
	  kube-system                storage-provisioner                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-n6ssn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-xmnmv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type     Reason                            Age                    From                                                Message
	  ----     ------                            ----                   ----                                                -------
	  Normal   NodeHasSufficientMemory           8m44s (x8 over 8m44s)  kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             8m44s (x7 over 8m44s)  kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              8m44s (x8 over 8m44s)  kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	  Normal   Starting                          7m59s                  kube-proxy, old-k8s-version-20210813043048-2022292  Starting kube-proxy.
	  Normal   Starting                          6m10s                  kubelet, old-k8s-version-20210813043048-2022292     Starting kubelet.
	  Normal   NodeHasSufficientMemory           6m10s (x8 over 6m10s)  kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             6m10s (x8 over 6m10s)  kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              6m10s (x7 over 6m10s)  kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	  Normal   Starting                          5m52s                  kube-proxy, old-k8s-version-20210813043048-2022292  Starting kube-proxy.
	  Warning  FailedNodeAllocatableEnforcement  70s (x6 over 6m10s)    kubelet, old-k8s-version-20210813043048-2022292     Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: Failed to set config for supported subsystems : failed to write 0 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/hugetlb.64kB.limit_in_bytes: permission denied
	  Normal   Starting                          8s                     kubelet, old-k8s-version-20210813043048-2022292     Starting kubelet.
	  Normal   NodeHasSufficientMemory           8s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             8s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              8s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady                      8s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeNotReady
	  Normal   Starting                          6s                     kubelet, old-k8s-version-20210813043048-2022292     Starting kubelet.
	  Normal   NodeHasSufficientMemory           6s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID              6s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure             6s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   Starting                          5s                     kubelet, old-k8s-version-20210813043048-2022292     Starting kubelet.
	  Normal   NodeHasSufficientMemory           5s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             5s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              5s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	  Normal   Starting                          3s                     kubelet, old-k8s-version-20210813043048-2022292     Starting kubelet.
	  Normal   NodeHasSufficientMemory           3s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             3s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              3s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	  Normal   Starting                          2s                     kubelet, old-k8s-version-20210813043048-2022292     Starting kubelet.
	  Normal   NodeHasSufficientMemory           2s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             2s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              2s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	  Normal   Starting                          0s                     kubelet, old-k8s-version-20210813043048-2022292     Starting kubelet.
	  Normal   NodeHasSufficientMemory           0s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure             0s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID              0s                     kubelet, old-k8s-version-20210813043048-2022292     Node old-k8s-version-20210813043048-2022292 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.001061] FS-Cache: O-key=[8] 'ce42040000000000'
	[  +0.000800] FS-Cache: N-cookie c=00000000f491aea5 [p=00000000dc37798f fl=2 nc=0 na=1]
	[  +0.001313] FS-Cache: N-cookie d=0000000029214e1b n=00000000a36cc3bc
	[  +0.001052] FS-Cache: N-key=[8] 'ce42040000000000'
	[Aug13 04:05] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=0000000049ef8e94 [p=00000000dc37798f fl=226 nc=0 na=1]
	[  +0.001324] FS-Cache: O-cookie d=0000000029214e1b n=0000000070677f7f
	[  +0.001053] FS-Cache: O-key=[8] 'ae42040000000000'
	[  +0.000801] FS-Cache: N-cookie c=000000000ce39a3e [p=00000000dc37798f fl=2 nc=0 na=1]
	[  +0.001320] FS-Cache: N-cookie d=0000000029214e1b n=000000005ea55429
	[  +0.001052] FS-Cache: N-key=[8] 'ae42040000000000'
	[  +0.001492] FS-Cache: Duplicate cookie detected
	[  +0.000804] FS-Cache: O-cookie c=00000000ea269615 [p=00000000dc37798f fl=226 nc=0 na=1]
	[  +0.001313] FS-Cache: O-cookie d=0000000029214e1b n=000000000e250366
	[  +0.001056] FS-Cache: O-key=[8] 'ce42040000000000'
	[  +0.000797] FS-Cache: N-cookie c=000000000ce39a3e [p=00000000dc37798f fl=2 nc=0 na=1]
	[  +0.001309] FS-Cache: N-cookie d=0000000029214e1b n=000000001f865425
	[  +0.001050] FS-Cache: N-key=[8] 'ce42040000000000'
	[  +0.001469] FS-Cache: Duplicate cookie detected
	[  +0.000798] FS-Cache: O-cookie c=000000001a114129 [p=00000000dc37798f fl=226 nc=0 na=1]
	[  +0.001324] FS-Cache: O-cookie d=0000000029214e1b n=0000000016cfc1b9
	[  +0.001049] FS-Cache: O-key=[8] 'b042040000000000'
	[  +0.000800] FS-Cache: N-cookie c=000000000ce39a3e [p=00000000dc37798f fl=2 nc=0 na=1]
	[  +0.001305] FS-Cache: N-cookie d=0000000029214e1b n=00000000e873df18
	[  +0.001054] FS-Cache: N-key=[8] 'b042040000000000'
	
	* 
	* ==> etcd [9a2b26f090b88c5b99180a1fdc55c6247d8641f719be70bb66886f837b95636d] <==
	* 2021-08-13 04:34:12.837427 W | auth: simple token is not cryptographically signed
	2021-08-13 04:34:12.839902 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
	2021-08-13 04:34:12.842481 I | etcdserver/membership: added member b2c6679ac05f2cf1 [https://192.168.58.2:2380] to cluster 3a56e4ca95e2355c
	2021-08-13 04:34:12.842554 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-13 04:34:12.842577 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-13 04:34:12.844893 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-13 04:34:12.845095 I | embed: listening for metrics on http://192.168.58.2:2381
	2021-08-13 04:34:12.845251 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-13 04:34:14.274882 I | raft: b2c6679ac05f2cf1 is starting a new election at term 2
	2021-08-13 04:34:14.274963 I | raft: b2c6679ac05f2cf1 became candidate at term 3
	2021-08-13 04:34:14.274995 I | raft: b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3
	2021-08-13 04:34:14.280396 I | raft: b2c6679ac05f2cf1 became leader at term 3
	2021-08-13 04:34:14.280429 I | raft: raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3
	2021-08-13 04:34:14.292375 I | etcdserver: published {Name:old-k8s-version-20210813043048-2022292 ClientURLs:[https://192.168.58.2:2379]} to cluster 3a56e4ca95e2355c
	2021-08-13 04:34:14.292445 I | embed: ready to serve client requests
	2021-08-13 04:34:14.292505 I | embed: ready to serve client requests
	2021-08-13 04:34:14.294070 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-13 04:34:14.362178 I | embed: serving client requests on 192.168.58.2:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-13 04:40:10.492537 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210813043048-2022292\" " with result "range_response_count:1 size:3742" took too long (113.844927ms) to execute
	2021-08-13 04:40:12.007622 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210813043048-2022292\" " with result "range_response_count:1 size:3742" took too long (140.809617ms) to execute
	2021-08-13 04:40:15.008854 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210813043048-2022292\" " with result "range_response_count:1 size:3742" took too long (108.683337ms) to execute
	2021-08-13 04:40:16.549990 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210813043048-2022292\" " with result "range_response_count:1 size:3742" took too long (111.871146ms) to execute
	2021-08-13 04:40:16.689490 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210813043048-2022292\" " with result "range_response_count:1 size:3742" took too long (137.020634ms) to execute
	
	* 
	* ==> etcd [ede934da6daa0251467435d6134da5714d2aaaa246642ee4b092bf23b20b8a4d] <==
	* 2021-08-13 04:31:37.808090 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-13 04:31:37.966692 W | etcdserver: request "ID:3238505112327492611 Method:\"PUT\" Path:\"/0/version\" Val:\"3.3.0\" " with result "" took too long (114.896531ms) to execute
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-13 04:31:50.570496 W | etcdserver: request "header:<ID:3238505112327493035 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:replication-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:replication-controller\" value_size:408 >> failure:<>>" with result "size:14" took too long (130.855391ms) to execute
	2021-08-13 04:31:50.570819 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (100.941765ms) to execute
	2021-08-13 04:31:51.147041 W | etcdserver: read-only range request "key:\"/registry/roles/kube-system/system:controller:cloud-provider\" " with result "range_response_count:0 size:5" took too long (141.699039ms) to execute
	2021-08-13 04:31:51.463486 W | etcdserver: read-only range request "key:\"/registry/rolebindings/kube-system/system::leader-locking-kube-controller-manager\" " with result "range_response_count:0 size:5" took too long (128.435378ms) to execute
	2021-08-13 04:31:53.025745 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" " with result "range_response_count:0 size:5" took too long (117.896656ms) to execute
	2021-08-13 04:31:53.229545 W | etcdserver: request "header:<ID:3238505112327493185 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kube-system/resourcequota-controller-token-sr8xz\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-system/resourcequota-controller-token-sr8xz\" value_size:2397 >> failure:<>>" with result "size:16" took too long (100.486512ms) to execute
	2021-08-13 04:31:53.233342 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-20210813043048-2022292.169ac36e083ebb7a\" " with result "range_response_count:1 size:551" took too long (106.833567ms) to execute
	2021-08-13 04:31:53.413984 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (101.518212ms) to execute
	2021-08-13 04:31:53.726516 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/statefulset-controller\" " with result "range_response_count:0 size:5" took too long (151.15078ms) to execute
	2021-08-13 04:31:55.366271 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/attachdetach-controller\" " with result "range_response_count:1 size:214" took too long (173.20919ms) to execute
	2021-08-13 04:31:55.599231 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (195.261657ms) to execute
	2021-08-13 04:31:55.831787 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pv-protection-controller\" " with result "range_response_count:1 size:216" took too long (152.476608ms) to execute
	2021-08-13 04:31:55.975555 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" " with result "range_response_count:0 size:5" took too long (131.820629ms) to execute
	2021-08-13 04:32:09.090464 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-node-lease/default\" " with result "range_response_count:1 size:189" took too long (121.150738ms) to execute
	2021-08-13 04:32:09.090620 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:1 size:173" took too long (117.948054ms) to execute
	2021-08-13 04:32:09.538758 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-fb8b8dccf-6jxrq\" " with result "range_response_count:1 size:1440" took too long (231.490649ms) to execute
	2021-08-13 04:32:09.538809 W | etcdserver: request "header:<ID:3238505112327493522 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns.169ac376942362f9\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns.169ac376942362f9\" value_size:345 lease:3238505112327493143 >> failure:<>>" with result "size:16" took too long (121.564805ms) to execute
	2021-08-13 04:32:09.749317 W | etcdserver: request "header:<ID:3238505112327493531 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-fb8b8dccf-6jxrq\" mod_revision:360 > success:<request_delete_range:<key:\"/registry/pods/kube-system/coredns-fb8b8dccf-6jxrq\" > > failure:<request_range:<key:\"/registry/pods/kube-system/coredns-fb8b8dccf-6jxrq\" > >>" with result "size:18" took too long (100.955034ms) to execute
	2021-08-13 04:32:10.221490 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:458" took too long (117.968379ms) to execute
	2021-08-13 04:32:10.221663 W | etcdserver: read-only range request "key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" " with result "range_response_count:0 size:5" took too long (142.571078ms) to execute
	2021-08-13 04:32:11.557373 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (116.619745ms) to execute
	
	* 
	* ==> kernel <==
	*  04:40:17 up 14:22,  0 users,  load average: 2.87, 2.15, 2.09
	Linux old-k8s-version-20210813043048-2022292 5.8.0-1041-aws #43~20.04.1-Ubuntu SMP Thu Jul 15 11:03:27 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [d341dc7c44990fd954f53b0a062faebb1aaab9aec1922164f28c6a8b52387f03] <==
	* I0813 04:40:04.155106       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:05.159911       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:05.160610       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:06.160733       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:06.160922       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:08.459007       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:08.459136       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:09.459275       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:09.459411       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:10.464434       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:10.464574       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:11.464693       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:11.464870       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:12.468447       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:12.468560       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:13.468792       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:13.468930       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:14.469043       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:14.469138       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:15.469251       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:15.469367       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:16.469439       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:16.469590       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:40:17.469695       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:40:17.469813       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-apiserver [fa0f286b4358e9cb1e23be3a2c4603853d2e15fe2cb82a1287187209fb40ff26] <==
	* I0813 04:33:10.996657       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:10.996789       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:11.996898       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:11.996991       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:12.997098       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:12.997223       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:13.997336       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:13.997456       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:14.997563       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:14.997788       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:15.997879       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:15.998013       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:16.998134       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:16.998337       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:17.998420       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:17.998549       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:18.998656       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:18.998770       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:19.998879       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:19.999042       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:20.999148       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:20.999355       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:21.999472       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0813 04:33:21.999736       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0813 04:33:23.002604       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	
	* 
	* ==> kube-controller-manager [2cc7d2beeb7ce652e2b5700da3ab1c5f73bc77e9ede2226da137650b001baf2c] <==
	* I0813 04:35:18.978303       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"dd0d4c9e-fbef-11eb-96a3-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0813 04:35:18.986425       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 04:35:18.986419       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"dd0d4c9e-fbef-11eb-96a3-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0813 04:35:19.021115       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"dd0a0f5e-fbef-11eb-96a3-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-n6ssn
	I0813 04:35:19.035789       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"dd0d4c9e-fbef-11eb-96a3-0242c0a83a02", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-xmnmv
	E0813 04:35:41.645644       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:35:44.227588       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:36:11.896799       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:36:16.228655       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:36:42.147884       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:36:48.229847       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:37:12.398888       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:37:20.231680       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:37:42.649947       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:37:52.232963       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:38:12.901093       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:38:24.233994       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:38:43.152367       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:38:56.235146       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:39:13.404389       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:39:28.236401       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0813 04:39:43.655869       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0813 04:40:00.237782       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	I0813 04:40:09.650288       1 node_lifecycle_controller.go:1009] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	E0813 04:40:13.908449       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [2cab4a5a3763eed528d32e62fb4162c211413444ef90a588ae2383caa8746b65] <==
	* W0813 04:34:24.291897       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 04:34:24.312453       1 server_others.go:148] Using iptables Proxier.
	I0813 04:34:24.312622       1 server_others.go:178] Tearing down inactive rules.
	I0813 04:34:24.744786       1 server.go:555] Version: v1.14.0
	I0813 04:34:24.784436       1 config.go:202] Starting service config controller
	I0813 04:34:24.784452       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 04:34:24.784602       1 config.go:102] Starting endpoints config controller
	I0813 04:34:24.784607       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 04:34:24.884643       1 controller_utils.go:1034] Caches are synced for service config controller
	I0813 04:34:24.884825       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	
	* 
	* ==> kube-proxy [f97ff7f944d495d64caffa30c2356c092fd186015004d469c02f63268e544e07] <==
	* W0813 04:32:16.041359       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0813 04:32:16.052644       1 server_others.go:148] Using iptables Proxier.
	I0813 04:32:16.052851       1 server_others.go:178] Tearing down inactive rules.
	I0813 04:32:17.340779       1 server.go:555] Version: v1.14.0
	I0813 04:32:17.360896       1 config.go:102] Starting endpoints config controller
	I0813 04:32:17.360954       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0813 04:32:17.361002       1 config.go:202] Starting service config controller
	I0813 04:32:17.361053       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0813 04:32:17.461077       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	I0813 04:32:17.461184       1 controller_utils.go:1034] Caches are synced for service config controller
	
	* 
	* ==> kube-scheduler [5f4fe9ac4b5eccc0e25925f5407f0baa4521cdec2a18fca6b79b07c606e35e6b] <==
	* I0813 04:34:10.975461       1 serving.go:319] Generated self-signed cert in-memory
	W0813 04:34:14.395349       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
	W0813 04:34:14.395368       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
	W0813 04:34:14.395378       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
	I0813 04:34:14.397958       1 server.go:142] Version: v1.14.0
	I0813 04:34:14.402192       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0813 04:34:14.404822       1 authorization.go:47] Authorization is disabled
	W0813 04:34:14.404836       1 authentication.go:55] Authentication is disabled
	I0813 04:34:14.404847       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 04:34:14.405293       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 04:34:22.463902       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0813 04:34:22.464242       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0813 04:34:22.464956       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0813 04:34:22.465253       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	I0813 04:34:23.919558       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 04:34:24.020407       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kube-scheduler [ae7249685fc33d11e70d8b4c8dc90a3b3f608ae9e4bb557a1f3744b580e24317] <==
	* W0813 04:31:41.118844       1 authentication.go:55] Authentication is disabled
	I0813 04:31:41.118866       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0813 04:31:41.119322       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0813 04:31:48.328343       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 04:31:48.333317       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 04:31:48.333386       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 04:31:48.333421       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 04:31:48.333463       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 04:31:48.333494       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 04:31:48.333585       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 04:31:48.333663       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 04:31:48.335300       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 04:31:48.347660       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0813 04:31:49.329335       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0813 04:31:49.334343       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0813 04:31:49.335382       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0813 04:31:49.341191       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0813 04:31:49.345426       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0813 04:31:49.352794       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0813 04:31:49.357193       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0813 04:31:49.359889       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0813 04:31:49.361364       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0813 04:31:49.365411       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0813 04:31:51.221914       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0813 04:31:51.322112       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Fri 2021-08-13 04:33:44 UTC, end at Fri 2021-08-13 04:40:19 UTC. --
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.747166    6631 kubelet.go:1823] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.747240    6631 server.go:141] Starting to listen on 0.0.0.0:10250
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.748175    6631 server.go:343] Adding debug handlers to kubelet server.
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.753341    6631 volume_manager.go:248] Starting Kubelet Volume Manager
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.758930    6631 clientconn.go:440] parsed scheme: "unix"
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.758960    6631 clientconn.go:440] scheme "unix" not registered, fallback to default scheme
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.759296    6631 desired_state_of_world_populator.go:130] Desired state populator starts to run
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.759476    6631 asm_arm64.s:1128] ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0  <nil>}]
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.759507    6631 clientconn.go:796] ClientConn switching balancer to "pick_first"
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.759553    6631 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x4000d283d0, CONNECTING
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.759855    6631 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0x4000d283d0, READY
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.847944    6631 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.857258    6631 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.857889    6631 kuberuntime_manager.go:946] updating runtime config through cri with podcidr 10.244.0.0/24
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.862881    6631 kubelet_node_status.go:72] Attempting to register node old-k8s-version-20210813043048-2022292
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.864005    6631 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.952407    6631 kubelet_node_status.go:114] Node old-k8s-version-20210813043048-2022292 was previously registered
	Aug 13 04:40:18 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:18.953352    6631 kubelet_node_status.go:75] Successfully registered node old-k8s-version-20210813043048-2022292
	Aug 13 04:40:19 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:19.048303    6631 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
	Aug 13 04:40:19 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:19.056225    6631 cpu_manager.go:155] [cpumanager] starting with none policy
	Aug 13 04:40:19 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:19.056242    6631 cpu_manager.go:156] [cpumanager] reconciling every 10s
	Aug 13 04:40:19 old-k8s-version-20210813043048-2022292 kubelet[6631]: I0813 04:40:19.056251    6631 policy_none.go:42] [cpumanager] none policy: Start
	Aug 13 04:40:19 old-k8s-version-20210813043048-2022292 kubelet[6631]: F0813 04:40:19.057713    6631 kubelet.go:1359] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: Failed to set config for supported subsystems : failed to write 4611686018427387904 to hugetlb.64kB.limit_in_bytes: open /sys/fs/cgroup/hugetlb/kubepods/burstable/hugetlb.64kB.limit_in_bytes: permission denied
	Aug 13 04:40:19 old-k8s-version-20210813043048-2022292 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 13 04:40:19 old-k8s-version-20210813043048-2022292 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	* 
	* ==> kubernetes-dashboard [e11993011a0ace13a844916905a50865343256deec1aed625969e72928ed06a3] <==
	* 2021/08/13 04:35:19 Using namespace: kubernetes-dashboard
	2021/08/13 04:35:19 Using in-cluster config to connect to apiserver
	2021/08/13 04:35:19 Using secret token for csrf signing
	2021/08/13 04:35:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/13 04:35:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/13 04:35:19 Successful initial request to the apiserver, version: v1.14.0
	2021/08/13 04:35:19 Generating JWE encryption key
	2021/08/13 04:35:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/13 04:35:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/13 04:35:20 Initializing JWE encryption key from synchronized object
	2021/08/13 04:35:20 Creating in-cluster Sidecar client
	2021/08/13 04:35:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:35:20 Serving insecurely on HTTP port: 9090
	2021/08/13 04:35:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:36:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:36:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:37:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:37:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:38:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:38:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:39:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:39:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/13 04:35:19 Starting overwatch
	
	* 
	* ==> storage-provisioner [3a77288a95c8f00f74ab0ac8c3d250db01f7e14abc6a199fc27a47901cda01f0] <==
	* I0813 04:34:24.283465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0813 04:34:54.286079       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [915c40344b615927958968d753f3538c387ca316332f25ad8005f07afcdf921c] <==
	* I0813 04:35:09.405858       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0813 04:35:09.420206       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0813 04:35:09.420254       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0813 04:35:26.812051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0813 04:35:26.812644       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813043048-2022292_5268b0aa-4379-4687-a31f-891eff7dac7a!
	I0813 04:35:26.812813       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d4bd5ea-fbef-11eb-a893-0242577d9e8b", APIVersion:"v1", ResourceVersion:"768", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210813043048-2022292_5268b0aa-4379-4687-a31f-891eff7dac7a became leader
	I0813 04:35:26.912753       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210813043048-2022292_5268b0aa-4379-4687-a31f-891eff7dac7a!
	

                                                
                                                
-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292: exit status 2 (473.564641ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:262: (dbg) Run:  kubectl --context old-k8s-version-20210813043048-2022292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: metrics-server-8546d8b77b-snn6r
helpers_test.go:273: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20210813043048-2022292 describe pod metrics-server-8546d8b77b-snn6r
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210813043048-2022292 describe pod metrics-server-8546d8b77b-snn6r: exit status 1 (107.559329ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-snn6r" not found

                                                
                                                
** /stderr **
helpers_test.go:278: kubectl --context old-k8s-version-20210813043048-2022292 describe pod metrics-server-8546d8b77b-snn6r: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (14.52s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (553.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p cilium-20210813042828-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
E0813 04:52:01.447537 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 04:52:31.431369 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:31.436574 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:31.446684 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:31.467183 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:31.507296 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:31.587478 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:31.747745 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:32.068355 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:32.709217 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:33.989645 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:36.550133 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:41.670805 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:52:51.911887 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p cilium-20210813042828-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: exit status 80 (9m13.83112449s)
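When scanning many failures like the one above, it helps to pull the exit status and wall-clock duration out of the summary line mechanically rather than by eye. A minimal sketch (the `line` value is abbreviated here; `...` stands for the full flag list in the log):

```shell
#!/bin/sh
# Failure summary from net_test.go:98, with the flag list elided for brevity.
line='net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p cilium-20210813042828-2022292 ...: exit status 80 (9m13.83112449s)'
# Exit status: the digits following "exit status".
status=$(printf '%s\n' "$line" | sed 's/.*exit status \([0-9]*\).*/\1/')
# Duration: the contents of the trailing parentheses.
duration=$(printf '%s\n' "$line" | sed 's/.*(\(.*\))$/\1/')
echo "status=$status duration=$duration"
```

Note the duration (9m13s) is well past the 5m `--wait-timeout` passed to `minikube start`, which is a first hint that the failure happened in the post-provisioning wait phase rather than during cluster creation.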

-- stdout --
	* [cilium-20210813042828-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node cilium-20210813042828-2022292 in cluster cilium-20210813042828-2022292
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on containerd 1.4.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Cilium (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

-- /stdout --
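The stdout above ends at "Enabled addons" with no error step, so provisioning itself completed; the exit status 80 surfaces only later, during the `--wait=true` component verification. When triaging a batch of such logs, the last completed top-level step (lines beginning with `* `) can be extracted mechanically; a sketch over a saved excerpt of the stdout block:

```shell
#!/bin/sh
# Save the tail of the stdout block above, then report the last top-level
# step, which tells you how far `minikube start` actually got.
cat > /tmp/start_stdout.txt <<'EOF'
* Configuring Cilium (Container Networking Interface) ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
EOF
grep '^\* ' /tmp/start_stdout.txt | tail -n 1
```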
** stderr ** 
	I0813 04:51:55.266192 2203249 out.go:298] Setting OutFile to fd 1 ...
	I0813 04:51:55.266297 2203249 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:51:55.266307 2203249 out.go:311] Setting ErrFile to fd 2...
	I0813 04:51:55.266311 2203249 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:51:55.266452 2203249 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 04:51:55.266739 2203249 out.go:305] Setting JSON to false
	I0813 04:51:55.267624 2203249 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":52459,"bootTime":1628777856,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 04:51:55.267694 2203249 start.go:121] virtualization:  
	I0813 04:51:55.270607 2203249 out.go:177] * [cilium-20210813042828-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0813 04:51:55.272813 2203249 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 04:51:55.271678 2203249 notify.go:169] Checking for updates...
	I0813 04:51:55.275257 2203249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 04:51:55.277069 2203249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 04:51:55.279137 2203249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 04:51:55.279691 2203249 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 04:51:55.352552 2203249 docker.go:132] docker version: linux-20.10.8
	I0813 04:51:55.352643 2203249 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:51:55.488489 2203249 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:51:55.417149058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:51:55.488593 2203249 docker.go:244] overlay module found
	I0813 04:51:55.490568 2203249 out.go:177] * Using the docker driver based on user configuration
	I0813 04:51:55.490590 2203249 start.go:278] selected driver: docker
	I0813 04:51:55.490595 2203249 start.go:751] validating driver "docker" against <nil>
	I0813 04:51:55.490613 2203249 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 04:51:55.490664 2203249 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 04:51:55.490683 2203249 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 04:51:55.492863 2203249 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 04:51:55.493145 2203249 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:51:55.597841 2203249 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:51:55.53770889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:51:55.597941 2203249 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 04:51:55.598097 2203249 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 04:51:55.598112 2203249 cni.go:93] Creating CNI manager for "cilium"
	I0813 04:51:55.598119 2203249 start_flags.go:272] Found "Cilium" CNI - setting NetworkPlugin=cni
	I0813 04:51:55.598126 2203249 start_flags.go:277] config:
	{Name:cilium-20210813042828-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813042828-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 04:51:55.600384 2203249 out.go:177] * Starting control plane node cilium-20210813042828-2022292 in cluster cilium-20210813042828-2022292
	I0813 04:51:55.600413 2203249 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 04:51:55.602270 2203249 out.go:177] * Pulling base image ...
	I0813 04:51:55.602288 2203249 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 04:51:55.602314 2203249 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0813 04:51:55.602336 2203249 cache.go:56] Caching tarball of preloaded images
	I0813 04:51:55.602458 2203249 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 04:51:55.602474 2203249 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 04:51:55.602564 2203249 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/config.json ...
	I0813 04:51:55.602582 2203249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/config.json: {Name:mk34128fdcfd7e7e6904f68e85ef7c5e24512367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:51:55.602689 2203249 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 04:51:55.652895 2203249 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 04:51:55.652919 2203249 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 04:51:55.652930 2203249 cache.go:205] Successfully downloaded all kic artifacts
	I0813 04:51:55.652953 2203249 start.go:313] acquiring machines lock for cilium-20210813042828-2022292: {Name:mkee9c820d99e8fa7b7ecb8d977bb95c7717478c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 04:51:55.653509 2203249 start.go:317] acquired machines lock for "cilium-20210813042828-2022292" in 540.467µs
	I0813 04:51:55.653549 2203249 start.go:89] Provisioning new machine with config: &{Name:cilium-20210813042828-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813042828-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 04:51:55.653616 2203249 start.go:126] createHost starting for "" (driver="docker")
	I0813 04:51:55.656694 2203249 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 04:51:55.656928 2203249 start.go:160] libmachine.API.Create for "cilium-20210813042828-2022292" (driver="docker")
	I0813 04:51:55.656950 2203249 client.go:168] LocalClient.Create starting
	I0813 04:51:55.656998 2203249 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 04:51:55.657021 2203249 main.go:130] libmachine: Decoding PEM data...
	I0813 04:51:55.657038 2203249 main.go:130] libmachine: Parsing certificate...
	I0813 04:51:55.657141 2203249 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 04:51:55.657156 2203249 main.go:130] libmachine: Decoding PEM data...
	I0813 04:51:55.657167 2203249 main.go:130] libmachine: Parsing certificate...
	I0813 04:51:55.657493 2203249 cli_runner.go:115] Run: docker network inspect cilium-20210813042828-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 04:51:55.712445 2203249 cli_runner.go:162] docker network inspect cilium-20210813042828-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 04:51:55.712506 2203249 network_create.go:255] running [docker network inspect cilium-20210813042828-2022292] to gather additional debugging logs...
	I0813 04:51:55.712520 2203249 cli_runner.go:115] Run: docker network inspect cilium-20210813042828-2022292
	W0813 04:51:55.746884 2203249 cli_runner.go:162] docker network inspect cilium-20210813042828-2022292 returned with exit code 1
	I0813 04:51:55.746907 2203249 network_create.go:258] error running [docker network inspect cilium-20210813042828-2022292]: docker network inspect cilium-20210813042828-2022292: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: cilium-20210813042828-2022292
	I0813 04:51:55.746917 2203249 network_create.go:260] output of [docker network inspect cilium-20210813042828-2022292]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: cilium-20210813042828-2022292
	
	** /stderr **
	I0813 04:51:55.746977 2203249 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 04:51:55.783510 2203249 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x40005fd048] misses:0}
	I0813 04:51:55.783551 2203249 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 04:51:55.783567 2203249 network_create.go:106] attempt to create docker network cilium-20210813042828-2022292 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0813 04:51:55.783611 2203249 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true cilium-20210813042828-2022292
	I0813 04:51:55.875807 2203249 network_create.go:90] docker network cilium-20210813042828-2022292 192.168.49.0/24 created
	I0813 04:51:55.875836 2203249 kic.go:106] calculated static IP "192.168.49.2" for the "cilium-20210813042828-2022292" container
	I0813 04:51:55.875904 2203249 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 04:51:55.919708 2203249 cli_runner.go:115] Run: docker volume create cilium-20210813042828-2022292 --label name.minikube.sigs.k8s.io=cilium-20210813042828-2022292 --label created_by.minikube.sigs.k8s.io=true
	I0813 04:51:55.972790 2203249 oci.go:102] Successfully created a docker volume cilium-20210813042828-2022292
	I0813 04:51:55.972864 2203249 cli_runner.go:115] Run: docker run --rm --name cilium-20210813042828-2022292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210813042828-2022292 --entrypoint /usr/bin/test -v cilium-20210813042828-2022292:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 04:51:56.611949 2203249 oci.go:106] Successfully prepared a docker volume cilium-20210813042828-2022292
	W0813 04:51:56.611989 2203249 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 04:51:56.611997 2203249 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 04:51:56.612046 2203249 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 04:51:56.612250 2203249 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 04:51:56.612273 2203249 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 04:51:56.612315 2203249 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cilium-20210813042828-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 04:51:56.785233 2203249 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-20210813042828-2022292 --name cilium-20210813042828-2022292 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-20210813042828-2022292 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-20210813042828-2022292 --network cilium-20210813042828-2022292 --ip 192.168.49.2 --volume cilium-20210813042828-2022292:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 04:51:57.430678 2203249 cli_runner.go:115] Run: docker container inspect cilium-20210813042828-2022292 --format={{.State.Running}}
	I0813 04:51:57.490825 2203249 cli_runner.go:115] Run: docker container inspect cilium-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:51:57.548529 2203249 cli_runner.go:115] Run: docker exec cilium-20210813042828-2022292 stat /var/lib/dpkg/alternatives/iptables
	I0813 04:51:57.636462 2203249 oci.go:278] the created container "cilium-20210813042828-2022292" has a running status.
	I0813 04:51:57.636489 2203249 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cilium-20210813042828-2022292/id_rsa...
	I0813 04:51:58.328476 2203249 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cilium-20210813042828-2022292/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 04:51:58.460637 2203249 cli_runner.go:115] Run: docker container inspect cilium-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:51:58.513129 2203249 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 04:51:58.513146 2203249 kic_runner.go:115] Args: [docker exec --privileged cilium-20210813042828-2022292 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 04:52:10.999583 2203249 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v cilium-20210813042828-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (14.387223042s)
	I0813 04:52:10.999623 2203249 kic.go:188] duration metric: took 14.387352 seconds to extract preloaded images to volume
	I0813 04:52:10.999694 2203249 cli_runner.go:115] Run: docker container inspect cilium-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:52:11.050418 2203249 machine.go:88] provisioning docker machine ...
	I0813 04:52:11.050455 2203249 ubuntu.go:169] provisioning hostname "cilium-20210813042828-2022292"
	I0813 04:52:11.050512 2203249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813042828-2022292
	I0813 04:52:11.079986 2203249 main.go:130] libmachine: Using SSH client type: native
	I0813 04:52:11.080179 2203249 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 51006 <nil> <nil>}
	I0813 04:52:11.080200 2203249 main.go:130] libmachine: About to run SSH command:
	sudo hostname cilium-20210813042828-2022292 && echo "cilium-20210813042828-2022292" | sudo tee /etc/hostname
	I0813 04:52:11.199393 2203249 main.go:130] libmachine: SSH cmd err, output: <nil>: cilium-20210813042828-2022292
	
	I0813 04:52:11.199460 2203249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813042828-2022292
	I0813 04:52:11.229737 2203249 main.go:130] libmachine: Using SSH client type: native
	I0813 04:52:11.229906 2203249 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 51006 <nil> <nil>}
	I0813 04:52:11.229933 2203249 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scilium-20210813042828-2022292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-20210813042828-2022292/g' /etc/hosts;
				else 
					echo '127.0.1.1 cilium-20210813042828-2022292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 04:52:11.339167 2203249 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 04:52:11.339215 2203249 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 04:52:11.339246 2203249 ubuntu.go:177] setting up certificates
	I0813 04:52:11.339254 2203249 provision.go:83] configureAuth start
	I0813 04:52:11.339315 2203249 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210813042828-2022292
	I0813 04:52:11.368344 2203249 provision.go:137] copyHostCerts
	I0813 04:52:11.368401 2203249 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 04:52:11.368413 2203249 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 04:52:11.368463 2203249 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 04:52:11.368543 2203249 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 04:52:11.368555 2203249 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 04:52:11.368579 2203249 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 04:52:11.368634 2203249 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 04:52:11.368645 2203249 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 04:52:11.368669 2203249 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 04:52:11.368718 2203249 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.cilium-20210813042828-2022292 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-20210813042828-2022292]
	I0813 04:52:11.925383 2203249 provision.go:171] copyRemoteCerts
	I0813 04:52:11.925440 2203249 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 04:52:11.925482 2203249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813042828-2022292
	I0813 04:52:11.954988 2203249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51006 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cilium-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:52:12.038316 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0813 04:52:12.053419 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0813 04:52:12.067783 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 04:52:12.082449 2203249 provision.go:86] duration metric: configureAuth took 743.186621ms
	I0813 04:52:12.082465 2203249 ubuntu.go:193] setting minikube options for container-runtime
	I0813 04:52:12.082608 2203249 machine.go:91] provisioned docker machine in 1.032166869s
	I0813 04:52:12.082614 2203249 client.go:171] LocalClient.Create took 16.425659303s
	I0813 04:52:12.082633 2203249 start.go:168] duration metric: libmachine.API.Create for "cilium-20210813042828-2022292" took 16.425704054s
	I0813 04:52:12.082641 2203249 start.go:267] post-start starting for "cilium-20210813042828-2022292" (driver="docker")
	I0813 04:52:12.082646 2203249 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 04:52:12.082686 2203249 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 04:52:12.082726 2203249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813042828-2022292
	I0813 04:52:12.113269 2203249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51006 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cilium-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:52:12.194473 2203249 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 04:52:12.196903 2203249 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 04:52:12.196928 2203249 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 04:52:12.196942 2203249 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 04:52:12.196951 2203249 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 04:52:12.196964 2203249 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 04:52:12.197005 2203249 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 04:52:12.197095 2203249 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem -> 20222922.pem in /etc/ssl/certs
	I0813 04:52:12.197190 2203249 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 04:52:12.202894 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem --> /etc/ssl/certs/20222922.pem (1708 bytes)
	I0813 04:52:12.217812 2203249 start.go:270] post-start completed in 135.162438ms
	I0813 04:52:12.218110 2203249 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210813042828-2022292
	I0813 04:52:12.247956 2203249 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/config.json ...
	I0813 04:52:12.248146 2203249 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 04:52:12.248193 2203249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813042828-2022292
	I0813 04:52:12.277021 2203249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51006 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cilium-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:52:12.363932 2203249 start.go:129] duration metric: createHost completed in 16.710305794s
	I0813 04:52:12.363956 2203249 start.go:80] releasing machines lock for "cilium-20210813042828-2022292", held for 16.710429238s
	I0813 04:52:12.364025 2203249 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-20210813042828-2022292
	I0813 04:52:12.393053 2203249 ssh_runner.go:149] Run: systemctl --version
	I0813 04:52:12.393101 2203249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813042828-2022292
	I0813 04:52:12.393330 2203249 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 04:52:12.393385 2203249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813042828-2022292
	I0813 04:52:12.434969 2203249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51006 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cilium-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:52:12.444530 2203249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51006 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cilium-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:52:12.515706 2203249 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 04:52:12.650427 2203249 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 04:52:12.658438 2203249 docker.go:153] disabling docker service ...
	I0813 04:52:12.658507 2203249 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 04:52:12.675803 2203249 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 04:52:12.684110 2203249 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 04:52:12.763117 2203249 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 04:52:12.842273 2203249 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 04:52:12.850633 2203249 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 04:52:12.861769 2203249 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmV
yZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICB
jb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 04:52:12.873164 2203249 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 04:52:12.878899 2203249 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 04:52:12.884450 2203249 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 04:52:12.965201 2203249 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 04:52:13.034192 2203249 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 04:52:13.034298 2203249 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 04:52:13.037964 2203249 start.go:417] Will wait 60s for crictl version
	I0813 04:52:13.038036 2203249 ssh_runner.go:149] Run: sudo crictl version
	I0813 04:52:13.064813 2203249 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-13T04:52:13Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0813 04:52:24.112473 2203249 ssh_runner.go:149] Run: sudo crictl version
	I0813 04:52:24.152022 2203249 start.go:426] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.6
	RuntimeApiVersion:  v1alpha2
	I0813 04:52:24.152089 2203249 ssh_runner.go:149] Run: containerd --version
	I0813 04:52:24.201133 2203249 ssh_runner.go:149] Run: containerd --version
	I0813 04:52:24.225916 2203249 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.6 ...
	I0813 04:52:24.225987 2203249 cli_runner.go:115] Run: docker network inspect cilium-20210813042828-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 04:52:24.266588 2203249 ssh_runner.go:149] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0813 04:52:24.271335 2203249 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 04:52:24.281914 2203249 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 04:52:24.281980 2203249 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 04:52:24.313912 2203249 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 04:52:24.313935 2203249 containerd.go:517] Images already preloaded, skipping extraction
	I0813 04:52:24.313979 2203249 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 04:52:24.339455 2203249 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 04:52:24.339476 2203249 cache_images.go:74] Images are preloaded, skipping loading
	I0813 04:52:24.339532 2203249 ssh_runner.go:149] Run: sudo crictl info
	I0813 04:52:24.371861 2203249 cni.go:93] Creating CNI manager for "cilium"
	I0813 04:52:24.371884 2203249 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 04:52:24.371896 2203249 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-20210813042828-2022292 NodeName:cilium-20210813042828-2022292 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 04:52:24.372018 2203249 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "cilium-20210813042828-2022292"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 04:52:24.372104 2203249 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=cilium-20210813042828-2022292 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813042828-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
	I0813 04:52:24.372159 2203249 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 04:52:24.379114 2203249 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 04:52:24.379164 2203249 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 04:52:24.387123 2203249 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (543 bytes)
	I0813 04:52:24.401472 2203249 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 04:52:24.414778 2203249 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0813 04:52:24.431227 2203249 ssh_runner.go:149] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0813 04:52:24.434022 2203249 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 04:52:24.442313 2203249 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292 for IP: 192.168.49.2
	I0813 04:52:24.442359 2203249 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 04:52:24.442382 2203249 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 04:52:24.442430 2203249 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/client.key
	I0813 04:52:24.442438 2203249 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/client.crt with IP's: []
	I0813 04:52:25.154435 2203249 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/client.crt ...
	I0813 04:52:25.154465 2203249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/client.crt: {Name:mk427b7fba7e1025e0f01d0aab17012471ff6d4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:52:25.154650 2203249 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/client.key ...
	I0813 04:52:25.154668 2203249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/client.key: {Name:mkd10c94a064c63621d11c40c5965ca388e24915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:52:25.154759 2203249 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.key.dd3b5fb2
	I0813 04:52:25.154773 2203249 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 04:52:25.475540 2203249 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.crt.dd3b5fb2 ...
	I0813 04:52:25.475568 2203249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.crt.dd3b5fb2: {Name:mk2baf046b3e29d50a44a405a1bbd091c009b69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:52:25.475738 2203249 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.key.dd3b5fb2 ...
	I0813 04:52:25.475753 2203249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.key.dd3b5fb2: {Name:mk03a16f7100e831148e9d68d9c015a52881009a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:52:25.476728 2203249 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.crt
	I0813 04:52:25.476794 2203249 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.key
	I0813 04:52:25.476844 2203249 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/proxy-client.key
	I0813 04:52:25.476855 2203249 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/proxy-client.crt with IP's: []
	I0813 04:52:26.743089 2203249 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/proxy-client.crt ...
	I0813 04:52:26.743136 2203249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/proxy-client.crt: {Name:mk01af5815a3b906469c9d969e40d0880b1c795e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:52:26.743353 2203249 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/proxy-client.key ...
	I0813 04:52:26.743383 2203249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/proxy-client.key: {Name:mka2ab4565238c71ae9328a58d04c4b192e644d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:52:26.743615 2203249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/2022292.pem (1338 bytes)
	W0813 04:52:26.743675 2203249 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/2022292_empty.pem, impossibly tiny 0 bytes
	I0813 04:52:26.743700 2203249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 04:52:26.743751 2203249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 04:52:26.743797 2203249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 04:52:26.743835 2203249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 04:52:26.743904 2203249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem (1708 bytes)
	I0813 04:52:26.744999 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 04:52:26.765483 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 04:52:26.790936 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 04:52:26.811850 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/cilium-20210813042828-2022292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 04:52:26.829357 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 04:52:26.845069 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 04:52:26.862881 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 04:52:26.882243 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 04:52:26.899592 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/2022292.pem --> /usr/share/ca-certificates/2022292.pem (1338 bytes)
	I0813 04:52:26.924602 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem --> /usr/share/ca-certificates/20222922.pem (1708 bytes)
	I0813 04:52:26.948404 2203249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 04:52:26.973606 2203249 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 04:52:26.992662 2203249 ssh_runner.go:149] Run: openssl version
	I0813 04:52:27.001839 2203249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 04:52:27.017818 2203249 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 04:52:27.021821 2203249 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 13 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0813 04:52:27.021888 2203249 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 04:52:27.028951 2203249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 04:52:27.037313 2203249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2022292.pem && ln -fs /usr/share/ca-certificates/2022292.pem /etc/ssl/certs/2022292.pem"
	I0813 04:52:27.045224 2203249 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2022292.pem
	I0813 04:52:27.048280 2203249 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 03:55 /usr/share/ca-certificates/2022292.pem
	I0813 04:52:27.048372 2203249 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2022292.pem
	I0813 04:52:27.057516 2203249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2022292.pem /etc/ssl/certs/51391683.0"
	I0813 04:52:27.065339 2203249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20222922.pem && ln -fs /usr/share/ca-certificates/20222922.pem /etc/ssl/certs/20222922.pem"
	I0813 04:52:27.072648 2203249 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/20222922.pem
	I0813 04:52:27.076741 2203249 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 03:55 /usr/share/ca-certificates/20222922.pem
	I0813 04:52:27.076834 2203249 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20222922.pem
	I0813 04:52:27.083219 2203249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20222922.pem /etc/ssl/certs/3ec20f2e.0"
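The `openssl x509 -hash` / `ln -fs` pairs above install each CA certificate into the system trust store under its OpenSSL subject-hash name (e.g. `b5213941.0`). A minimal Go sketch of the command strings minikube runs over SSH (helper names are hypothetical, not minikube's actual functions; the command text mirrors the log lines above):

```go
package main

import "fmt"

// hashCmd builds the openssl invocation that prints a certificate's
// subject hash, as seen in the ssh_runner lines above.
func hashCmd(pemPath string) string {
	return fmt.Sprintf("openssl x509 -hash -noout -in %s", pemPath)
}

// linkCmd builds the idempotent symlink command: only create the
// /etc/ssl/certs/<hash>.0 link if it does not already exist.
func linkCmd(hash, pemPath string) string {
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return fmt.Sprintf("sudo /bin/bash -c \"test -L %s || ln -fs %s %s\"", link, pemPath, link)
}

func main() {
	// Reproduces the minikubeCA pair from the log.
	fmt.Println(hashCmd("/usr/share/ca-certificates/minikubeCA.pem"))
	fmt.Println(linkCmd("b5213941", "/etc/ssl/certs/minikubeCA.pem"))
}
```

The `test -L … ||` guard is why re-runs of these steps are safe: an existing link short-circuits the `ln`.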
	I0813 04:52:27.090068 2203249 kubeadm.go:390] StartCluster: {Name:cilium-20210813042828-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:cilium-20210813042828-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 04:52:27.090156 2203249 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 04:52:27.090197 2203249 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 04:52:27.138840 2203249 cri.go:76] found id: ""
	I0813 04:52:27.138904 2203249 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 04:52:27.150651 2203249 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 04:52:27.162859 2203249 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 04:52:27.162911 2203249 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 04:52:27.172184 2203249 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 04:52:27.172223 2203249 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 04:52:27.605378 2203249 out.go:204]   - Generating certificates and keys ...
	I0813 04:52:34.470517 2203249 out.go:204]   - Booting up control plane ...
	I0813 04:52:54.037934 2203249 out.go:204]   - Configuring RBAC rules ...
	I0813 04:52:54.654593 2203249 cni.go:93] Creating CNI manager for "cilium"
	I0813 04:52:54.657879 2203249 out.go:177] * Configuring Cilium (Container Networking Interface) ...
	I0813 04:52:54.657942 2203249 ssh_runner.go:149] Run: sudo /bin/bash -c "grep 'bpffs /sys/fs/bpf' /proc/mounts || sudo mount bpffs -t bpf /sys/fs/bpf"
	I0813 04:52:54.719621 2203249 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 04:52:54.719642 2203249 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (18465 bytes)
	I0813 04:52:54.748928 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 04:52:55.684282 2203249 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 04:52:55.684423 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:52:55.684477 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=cilium-20210813042828-2022292 minikube.k8s.io/updated_at=2021_08_13T04_52_55_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:52:55.877331 2203249 ops.go:34] apiserver oom_adj: -16
	I0813 04:52:55.877463 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:52:56.476094 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:52:56.975580 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:52:57.475744 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:52:57.975960 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:52:58.475575 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:52:58.976188 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:52:59.476265 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:52:59.975946 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:00.476003 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:00.975603 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:01.476129 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:01.975599 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:02.475798 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:02.976411 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:03.475965 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:03.976113 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:04.476239 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:04.975601 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:05.475554 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:05.976498 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:06.476059 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:06.976407 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:07.475922 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:07.975583 2203249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:53:08.162908 2203249 kubeadm.go:985] duration metric: took 12.478522671s to wait for elevateKubeSystemPrivileges.
	I0813 04:53:08.162930 2203249 kubeadm.go:392] StartCluster complete in 41.072868967s
	I0813 04:53:08.162946 2203249 settings.go:142] acquiring lock: {Name:mke0b9bf6059169e73bfde24fe8e8162c3ec0654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:53:08.163019 2203249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 04:53:08.164572 2203249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk6797826f33680e9cda7cd38a7adfcabda9681c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:53:08.710434 2203249 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20210813042828-2022292" rescaled to 1
	I0813 04:53:08.710513 2203249 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 04:53:08.712544 2203249 out.go:177] * Verifying Kubernetes components...
	I0813 04:53:08.710729 2203249 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 04:53:08.710527 2203249 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 04:53:08.712668 2203249 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 04:53:08.712741 2203249 addons.go:59] Setting storage-provisioner=true in profile "cilium-20210813042828-2022292"
	I0813 04:53:08.712923 2203249 addons.go:135] Setting addon storage-provisioner=true in "cilium-20210813042828-2022292"
	W0813 04:53:08.712961 2203249 addons.go:147] addon storage-provisioner should already be in state true
	I0813 04:53:08.712750 2203249 addons.go:59] Setting default-storageclass=true in profile "cilium-20210813042828-2022292"
	I0813 04:53:08.713030 2203249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20210813042828-2022292"
	I0813 04:53:08.713415 2203249 cli_runner.go:115] Run: docker container inspect cilium-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:53:08.713559 2203249 host.go:66] Checking if "cilium-20210813042828-2022292" exists ...
	I0813 04:53:08.714693 2203249 cli_runner.go:115] Run: docker container inspect cilium-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:53:08.796872 2203249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 04:53:08.796991 2203249 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 04:53:08.797003 2203249 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 04:53:08.797073 2203249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813042828-2022292
	I0813 04:53:08.797864 2203249 addons.go:135] Setting addon default-storageclass=true in "cilium-20210813042828-2022292"
	W0813 04:53:08.797884 2203249 addons.go:147] addon default-storageclass should already be in state true
	I0813 04:53:08.797907 2203249 host.go:66] Checking if "cilium-20210813042828-2022292" exists ...
	I0813 04:53:08.798381 2203249 cli_runner.go:115] Run: docker container inspect cilium-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:53:08.866674 2203249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51006 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cilium-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:53:08.867926 2203249 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 04:53:08.867945 2203249 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 04:53:08.867994 2203249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210813042828-2022292
	I0813 04:53:08.931940 2203249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51006 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/cilium-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:53:08.965617 2203249 node_ready.go:35] waiting up to 5m0s for node "cilium-20210813042828-2022292" to be "Ready" ...
	I0813 04:53:08.965876 2203249 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 04:53:08.974048 2203249 node_ready.go:49] node "cilium-20210813042828-2022292" has status "Ready":"True"
	I0813 04:53:08.974071 2203249 node_ready.go:38] duration metric: took 8.427886ms waiting for node "cilium-20210813042828-2022292" to be "Ready" ...
	I0813 04:53:08.974079 2203249 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 04:53:08.986940 2203249 pod_ready.go:78] waiting up to 5m0s for pod "cilium-j4sxg" in "kube-system" namespace to be "Ready" ...
	I0813 04:53:09.045092 2203249 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 04:53:09.102376 2203249 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 04:53:09.605319 2203249 start.go:736] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0813 04:53:09.872719 2203249 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0813 04:53:09.872738 2203249 addons.go:344] enableAddons completed in 1.162012588s
	I0813 04:53:11.003360 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:13.037419 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:15.501073 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:18.001723 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:20.500657 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:22.500863 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:25.000742 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:27.002729 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:29.500390 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:31.508137 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:34.001514 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:36.500702 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:38.501037 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:40.501677 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:42.511717 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:45.001187 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:47.502377 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:50.000623 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:52.001139 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:54.005480 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:56.500389 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:53:58.501408 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:01.001709 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:03.500647 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:05.500778 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:08.001541 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:10.504542 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:13.001040 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:15.001697 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:17.500592 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:20.001236 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:22.001442 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:24.505927 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:27.000559 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:29.500525 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:32.000655 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:34.001207 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:36.500079 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:38.500423 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:40.500813 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:42.610026 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:45.000846 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:47.001405 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:49.500066 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:51.500610 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:53.500759 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:56.001220 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:58.500591 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:01.000795 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:03.500242 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:05.500510 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:08.000969 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:10.500766 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:13.000139 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:15.499948 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:17.500244 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:19.500559 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:21.500862 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:24.001054 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:26.001093 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:28.500194 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:31.001255 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:33.500231 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:36.000517 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:38.000601 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:40.004731 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:42.500390 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:45.000806 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:47.000961 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:49.500220 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:51.500491 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:53.501710 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:56.000705 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:58.499874 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:00.500004 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:02.500229 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:05.000506 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:07.001325 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:09.501505 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:12.000744 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:14.500226 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:16.500759 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:19.001212 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:21.001328 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:23.500008 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:26.000765 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:28.499663 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:30.500805 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:33.000367 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:35.000910 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:37.002467 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:39.500198 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:41.500245 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:43.500312 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:45.501312 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:48.001054 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:50.001236 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:52.500053 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:55.001158 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:57.500763 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:00.000822 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:02.001549 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:04.500067 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:07.001524 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:09.008568 2203249 pod_ready.go:102] pod "cilium-j4sxg" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:09.008586 2203249 pod_ready.go:81] duration metric: took 4m0.02161952s waiting for pod "cilium-j4sxg" in "kube-system" namespace to be "Ready" ...
	E0813 04:57:09.008594 2203249 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0813 04:57:09.008601 2203249 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace to be "Ready" ...
	I0813 04:57:11.019474 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:13.516834 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:15.522178 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:18.017170 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:20.516589 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:22.518030 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:24.544700 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:27.017249 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:29.018304 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:31.516558 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:33.517381 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:35.517478 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:38.017759 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:40.018834 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:42.516598 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:44.517894 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:47.017579 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:49.517149 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:52.019713 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:54.516459 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:57.016889 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:59.017625 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:01.521745 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:04.017128 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:06.017868 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:08.018108 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:10.516425 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:12.517093 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:14.517753 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:17.018226 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:19.019237 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:21.517058 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:23.517121 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:26.017713 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:28.017782 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:30.517333 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:33.017293 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:35.017415 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:37.017817 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:39.525535 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:42.021365 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:44.517013 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:46.517070 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:49.017604 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:51.017904 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:53.516907 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:55.624973 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:58.017909 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:00.517512 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:02.519558 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:05.017124 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:07.018621 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:09.022602 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:11.516767 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:13.516978 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:16.017566 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:18.516952 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:20.517241 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:22.517778 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:25.017475 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:27.018831 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:29.516555 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:31.516780 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:34.017677 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:36.094219 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:38.517857 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:41.017902 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:43.020465 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:45.516628 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:48.025488 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:50.517310 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:52.518331 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:55.017456 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:57.017797 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:59.021638 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:01.025738 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:03.516810 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:05.516901 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:07.517812 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:10.017684 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:12.017875 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:14.018056 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:16.576653 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:19.017045 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:21.018063 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:23.018146 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:25.516598 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:27.516948 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:29.517273 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:32.016995 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:34.017032 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:36.017140 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:38.516569 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:40.517283 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:42.517708 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:45.016855 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:47.517393 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:50.018114 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:52.517956 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:55.017687 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:57.062316 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:59.524206 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:02.017689 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:04.517276 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:07.018643 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:09.020902 2203249 pod_ready.go:102] pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:09.020953 2203249 pod_ready.go:81] duration metric: took 4m0.012343554s waiting for pod "cilium-operator-99d899fb5-hn5ff" in "kube-system" namespace to be "Ready" ...
	E0813 05:01:09.020970 2203249 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0813 05:01:09.020985 2203249 pod_ready.go:38] duration metric: took 8m0.046885619s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 05:01:09.023657 2203249 out.go:177] 
	W0813 05:01:09.023783 2203249 out.go:242] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0813 05:01:09.023796 2203249 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 05:01:09.026736 2203249 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	│                                                                                                                                                                │
	│    * Please attach the following file to the GitHub issue:                                                                                                     │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 05:01:09.029094 2203249 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/cilium/Start (553.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (545.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p calico-20210813042828-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd
E0813 04:53:12.392786 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:53:13.936407 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:53:29.757817 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:53:53.352947 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:55:15.273129 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:56:43.753730 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:43.758946 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:43.769066 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:43.789294 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:43.829485 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:43.909679 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:44.070049 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:44.390409 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:45.030661 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:46.311230 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:48.872060 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:56:53.992394 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:57:01.447380 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 04:57:04.232800 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:57:24.712981 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:57:31.431395 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:57:58.390655 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:57:58.395917 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:57:58.406127 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:57:58.426328 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:57:58.466528 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:57:58.546791 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:57:58.707135 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:57:59.027831 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:57:59.114071 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 04:57:59.668682 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:58:00.948824 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:58:03.509005 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:58:05.673899 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:58:08.629797 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:58:08.707040 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:58:12.804564 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:58:13.936872 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:58:18.870936 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:58:29.758881 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:58:39.351309 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:59:20.311950 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 04:59:27.594362 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 04:59:31.751202 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:59:36.979166 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 05:00:42.232406 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p calico-20210813042828-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (9m5.701713988s)

-- stdout --
	* [calico-20210813042828-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node calico-20210813042828-2022292 in cluster calico-20210813042828-2022292
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.21.3 on containerd 1.4.6 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0813 04:53:10.908317 2208249 out.go:298] Setting OutFile to fd 1 ...
	I0813 04:53:10.908438 2208249 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:53:10.908452 2208249 out.go:311] Setting ErrFile to fd 2...
	I0813 04:53:10.908456 2208249 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:53:10.908584 2208249 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 04:53:10.908845 2208249 out.go:305] Setting JSON to false
	I0813 04:53:10.909941 2208249 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":52535,"bootTime":1628777856,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 04:53:10.910029 2208249 start.go:121] virtualization:  
	I0813 04:53:10.913365 2208249 out.go:177] * [calico-20210813042828-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0813 04:53:10.915302 2208249 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 04:53:10.913483 2208249 notify.go:169] Checking for updates...
	I0813 04:53:10.917711 2208249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 04:53:10.920495 2208249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 04:53:10.922582 2208249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 04:53:10.923152 2208249 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 04:53:10.969036 2208249 docker.go:132] docker version: linux-20.10.8
	I0813 04:53:10.969122 2208249 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:53:11.107871 2208249 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:53:11.028017224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:53:11.107983 2208249 docker.go:244] overlay module found
	I0813 04:53:11.110107 2208249 out.go:177] * Using the docker driver based on user configuration
	I0813 04:53:11.110128 2208249 start.go:278] selected driver: docker
	I0813 04:53:11.110133 2208249 start.go:751] validating driver "docker" against <nil>
	I0813 04:53:11.110148 2208249 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 04:53:11.110189 2208249 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 04:53:11.110207 2208249 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 04:53:11.111936 2208249 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 04:53:11.112252 2208249 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:53:11.227243 2208249 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:53:11.140370024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:53:11.227366 2208249 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 04:53:11.227507 2208249 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0813 04:53:11.227522 2208249 cni.go:93] Creating CNI manager for "calico"
	I0813 04:53:11.227530 2208249 start_flags.go:272] Found "Calico" CNI - setting NetworkPlugin=cni
	I0813 04:53:11.227536 2208249 start_flags.go:277] config:
	{Name:calico-20210813042828-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210813042828-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 04:53:11.230327 2208249 out.go:177] * Starting control plane node calico-20210813042828-2022292 in cluster calico-20210813042828-2022292
	I0813 04:53:11.230443 2208249 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 04:53:11.232498 2208249 out.go:177] * Pulling base image ...
	I0813 04:53:11.232525 2208249 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 04:53:11.232561 2208249 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0813 04:53:11.232574 2208249 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 04:53:11.232577 2208249 cache.go:56] Caching tarball of preloaded images
	I0813 04:53:11.232806 2208249 preload.go:173] Found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0813 04:53:11.232831 2208249 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0813 04:53:11.232932 2208249 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/config.json ...
	I0813 04:53:11.232957 2208249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/config.json: {Name:mk52a013b9bc0115e185cd15011c9a5c6736778e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:53:11.282046 2208249 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 04:53:11.282069 2208249 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 04:53:11.282082 2208249 cache.go:205] Successfully downloaded all kic artifacts
	I0813 04:53:11.282108 2208249 start.go:313] acquiring machines lock for calico-20210813042828-2022292: {Name:mk5f3be3b8cf98ffa7b151d29b80ae72d0183357 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0813 04:53:11.282213 2208249 start.go:317] acquired machines lock for "calico-20210813042828-2022292" in 79.45µs
	I0813 04:53:11.282241 2208249 start.go:89] Provisioning new machine with config: &{Name:calico-20210813042828-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210813042828-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 04:53:11.282315 2208249 start.go:126] createHost starting for "" (driver="docker")
	I0813 04:53:11.286622 2208249 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0813 04:53:11.286847 2208249 start.go:160] libmachine.API.Create for "calico-20210813042828-2022292" (driver="docker")
	I0813 04:53:11.286869 2208249 client.go:168] LocalClient.Create starting
	I0813 04:53:11.286918 2208249 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem
	I0813 04:53:11.286943 2208249 main.go:130] libmachine: Decoding PEM data...
	I0813 04:53:11.286958 2208249 main.go:130] libmachine: Parsing certificate...
	I0813 04:53:11.287082 2208249 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem
	I0813 04:53:11.287097 2208249 main.go:130] libmachine: Decoding PEM data...
	I0813 04:53:11.287109 2208249 main.go:130] libmachine: Parsing certificate...
	I0813 04:53:11.287448 2208249 cli_runner.go:115] Run: docker network inspect calico-20210813042828-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0813 04:53:11.327165 2208249 cli_runner.go:162] docker network inspect calico-20210813042828-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0813 04:53:11.327240 2208249 network_create.go:255] running [docker network inspect calico-20210813042828-2022292] to gather additional debugging logs...
	I0813 04:53:11.327260 2208249 cli_runner.go:115] Run: docker network inspect calico-20210813042828-2022292
	W0813 04:53:11.367797 2208249 cli_runner.go:162] docker network inspect calico-20210813042828-2022292 returned with exit code 1
	I0813 04:53:11.367824 2208249 network_create.go:258] error running [docker network inspect calico-20210813042828-2022292]: docker network inspect calico-20210813042828-2022292: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20210813042828-2022292
	I0813 04:53:11.367835 2208249 network_create.go:260] output of [docker network inspect calico-20210813042828-2022292]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20210813042828-2022292
	
	** /stderr **
	I0813 04:53:11.367892 2208249 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 04:53:11.426922 2208249 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-411ae5d4aecb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:04:aa:a1:ca}}
	I0813 04:53:11.427237 2208249 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0x40004cc4c0] misses:0}
	I0813 04:53:11.427274 2208249 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0813 04:53:11.427292 2208249 network_create.go:106] attempt to create docker network calico-20210813042828-2022292 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0813 04:53:11.427341 2208249 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210813042828-2022292
	I0813 04:53:11.512043 2208249 network_create.go:90] docker network calico-20210813042828-2022292 192.168.58.0/24 created
	I0813 04:53:11.512076 2208249 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20210813042828-2022292" container
	I0813 04:53:11.512138 2208249 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0813 04:53:11.554779 2208249 cli_runner.go:115] Run: docker volume create calico-20210813042828-2022292 --label name.minikube.sigs.k8s.io=calico-20210813042828-2022292 --label created_by.minikube.sigs.k8s.io=true
	I0813 04:53:11.594267 2208249 oci.go:102] Successfully created a docker volume calico-20210813042828-2022292
	I0813 04:53:11.594416 2208249 cli_runner.go:115] Run: docker run --rm --name calico-20210813042828-2022292-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210813042828-2022292 --entrypoint /usr/bin/test -v calico-20210813042828-2022292:/var gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib
	I0813 04:53:12.391372 2208249 oci.go:106] Successfully prepared a docker volume calico-20210813042828-2022292
	W0813 04:53:12.391430 2208249 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0813 04:53:12.391441 2208249 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0813 04:53:12.391506 2208249 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0813 04:53:12.391520 2208249 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 04:53:12.391542 2208249 kic.go:179] Starting extracting preloaded images to volume ...
	I0813 04:53:12.391597 2208249 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-20210813042828-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0813 04:53:12.583217 2208249 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210813042828-2022292 --name calico-20210813042828-2022292 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210813042828-2022292 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210813042828-2022292 --network calico-20210813042828-2022292 --ip 192.168.58.2 --volume calico-20210813042828-2022292:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79
	I0813 04:53:13.438895 2208249 cli_runner.go:115] Run: docker container inspect calico-20210813042828-2022292 --format={{.State.Running}}
	I0813 04:53:13.499039 2208249 cli_runner.go:115] Run: docker container inspect calico-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:53:13.563089 2208249 cli_runner.go:115] Run: docker exec calico-20210813042828-2022292 stat /var/lib/dpkg/alternatives/iptables
	I0813 04:53:13.697784 2208249 oci.go:278] the created container "calico-20210813042828-2022292" has a running status.
	I0813 04:53:13.697814 2208249 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813042828-2022292/id_rsa...
	I0813 04:53:15.036469 2208249 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813042828-2022292/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0813 04:53:15.190478 2208249 cli_runner.go:115] Run: docker container inspect calico-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:53:15.250819 2208249 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0813 04:53:15.250837 2208249 kic_runner.go:115] Args: [docker exec --privileged calico-20210813042828-2022292 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0813 04:53:31.335858 2208249 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-20210813042828-2022292:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (18.944218747s)
	I0813 04:53:31.335891 2208249 kic.go:188] duration metric: took 18.944346 seconds to extract preloaded images to volume
	I0813 04:53:31.335967 2208249 cli_runner.go:115] Run: docker container inspect calico-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:53:31.388497 2208249 machine.go:88] provisioning docker machine ...
	I0813 04:53:31.388531 2208249 ubuntu.go:169] provisioning hostname "calico-20210813042828-2022292"
	I0813 04:53:31.388586 2208249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813042828-2022292
	I0813 04:53:31.445002 2208249 main.go:130] libmachine: Using SSH client type: native
	I0813 04:53:31.445184 2208249 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 51011 <nil> <nil>}
	I0813 04:53:31.445202 2208249 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20210813042828-2022292 && echo "calico-20210813042828-2022292" | sudo tee /etc/hostname
	I0813 04:53:31.592035 2208249 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20210813042828-2022292
	
	I0813 04:53:31.592106 2208249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813042828-2022292
	I0813 04:53:31.632564 2208249 main.go:130] libmachine: Using SSH client type: native
	I0813 04:53:31.632720 2208249 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x370ba0] 0x370b70 <nil>  [] 0s} 127.0.0.1 51011 <nil> <nil>}
	I0813 04:53:31.632748 2208249 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210813042828-2022292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210813042828-2022292/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210813042828-2022292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0813 04:53:31.743271 2208249 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0813 04:53:31.743329 2208249 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube}
	I0813 04:53:31.743360 2208249 ubuntu.go:177] setting up certificates
	I0813 04:53:31.743379 2208249 provision.go:83] configureAuth start
	I0813 04:53:31.743452 2208249 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210813042828-2022292
	I0813 04:53:31.774045 2208249 provision.go:137] copyHostCerts
	I0813 04:53:31.774101 2208249 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem, removing ...
	I0813 04:53:31.774114 2208249 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem
	I0813 04:53:31.774170 2208249 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.pem (1078 bytes)
	I0813 04:53:31.774246 2208249 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem, removing ...
	I0813 04:53:31.774259 2208249 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem
	I0813 04:53:31.774281 2208249 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cert.pem (1123 bytes)
	I0813 04:53:31.774356 2208249 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem, removing ...
	I0813 04:53:31.774367 2208249 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem
	I0813 04:53:31.774389 2208249 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/key.pem (1679 bytes)
	I0813 04:53:31.774429 2208249 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem org=jenkins.calico-20210813042828-2022292 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210813042828-2022292]
	I0813 04:53:32.047571 2208249 provision.go:171] copyRemoteCerts
	I0813 04:53:32.047625 2208249 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0813 04:53:32.047669 2208249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813042828-2022292
	I0813 04:53:32.077661 2208249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51011 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:53:32.158813 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0813 04:53:32.174371 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0813 04:53:32.189188 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0813 04:53:32.203858 2208249 provision.go:86] duration metric: configureAuth took 460.44764ms
	I0813 04:53:32.203873 2208249 ubuntu.go:193] setting minikube options for container-runtime
	I0813 04:53:32.204013 2208249 machine.go:91] provisioned docker machine in 815.497146ms
	I0813 04:53:32.204019 2208249 client.go:171] LocalClient.Create took 20.917145285s
	I0813 04:53:32.204029 2208249 start.go:168] duration metric: libmachine.API.Create for "calico-20210813042828-2022292" took 20.917181822s
	I0813 04:53:32.204037 2208249 start.go:267] post-start starting for "calico-20210813042828-2022292" (driver="docker")
	I0813 04:53:32.204041 2208249 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0813 04:53:32.204082 2208249 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0813 04:53:32.204121 2208249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813042828-2022292
	I0813 04:53:32.235335 2208249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51011 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:53:32.318706 2208249 ssh_runner.go:149] Run: cat /etc/os-release
	I0813 04:53:32.321228 2208249 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0813 04:53:32.321247 2208249 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0813 04:53:32.321262 2208249 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0813 04:53:32.321268 2208249 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0813 04:53:32.321276 2208249 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/addons for local assets ...
	I0813 04:53:32.321317 2208249 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files for local assets ...
	I0813 04:53:32.321390 2208249 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem -> 20222922.pem in /etc/ssl/certs
	I0813 04:53:32.321480 2208249 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0813 04:53:32.327269 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem --> /etc/ssl/certs/20222922.pem (1708 bytes)
	I0813 04:53:32.342101 2208249 start.go:270] post-start completed in 138.053033ms
	I0813 04:53:32.342384 2208249 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210813042828-2022292
	I0813 04:53:32.373066 2208249 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/config.json ...
	I0813 04:53:32.373249 2208249 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 04:53:32.373296 2208249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813042828-2022292
	I0813 04:53:32.403282 2208249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51011 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:53:32.489280 2208249 start.go:129] duration metric: createHost completed in 21.206954155s
	I0813 04:53:32.489303 2208249 start.go:80] releasing machines lock for "calico-20210813042828-2022292", held for 21.207075795s
	I0813 04:53:32.489374 2208249 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210813042828-2022292
	I0813 04:53:32.539383 2208249 ssh_runner.go:149] Run: systemctl --version
	I0813 04:53:32.539398 2208249 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0813 04:53:32.539436 2208249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813042828-2022292
	I0813 04:53:32.539452 2208249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813042828-2022292
	I0813 04:53:32.616168 2208249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51011 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:53:32.635905 2208249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51011 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:53:32.908246 2208249 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0813 04:53:32.918264 2208249 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0813 04:53:32.926423 2208249 docker.go:153] disabling docker service ...
	I0813 04:53:32.926465 2208249 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0813 04:53:32.942865 2208249 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0813 04:53:32.951029 2208249 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0813 04:53:33.079335 2208249 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0813 04:53:33.173851 2208249 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0813 04:53:33.182384 2208249 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0813 04:53:33.200838 2208249 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0813 04:53:33.212602 2208249 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0813 04:53:33.218310 2208249 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0813 04:53:33.223759 2208249 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0813 04:53:33.298654 2208249 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0813 04:53:33.373131 2208249 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0813 04:53:33.373224 2208249 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0813 04:53:33.377648 2208249 start.go:417] Will wait 60s for crictl version
	I0813 04:53:33.377716 2208249 ssh_runner.go:149] Run: sudo crictl version
	I0813 04:53:33.464173 2208249 start.go:426] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.6
	RuntimeApiVersion:  v1alpha2
	I0813 04:53:33.464268 2208249 ssh_runner.go:149] Run: containerd --version
	I0813 04:53:33.491480 2208249 ssh_runner.go:149] Run: containerd --version
	I0813 04:53:33.518222 2208249 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.6 ...
	I0813 04:53:33.518298 2208249 cli_runner.go:115] Run: docker network inspect calico-20210813042828-2022292 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0813 04:53:33.548489 2208249 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0813 04:53:33.551229 2208249 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 04:53:33.560463 2208249 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 04:53:33.560523 2208249 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 04:53:33.582689 2208249 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 04:53:33.582709 2208249 containerd.go:517] Images already preloaded, skipping extraction
	I0813 04:53:33.582743 2208249 ssh_runner.go:149] Run: sudo crictl images --output json
	I0813 04:53:33.605422 2208249 containerd.go:613] all images are preloaded for containerd runtime.
	I0813 04:53:33.605440 2208249 cache_images.go:74] Images are preloaded, skipping loading
	I0813 04:53:33.605477 2208249 ssh_runner.go:149] Run: sudo crictl info
	I0813 04:53:33.628725 2208249 cni.go:93] Creating CNI manager for "calico"
	I0813 04:53:33.628750 2208249 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0813 04:53:33.628761 2208249 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210813042828-2022292 NodeName:calico-20210813042828-2022292 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0813 04:53:33.628883 2208249 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-20210813042828-2022292"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0813 04:53:33.628963 2208249 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-20210813042828-2022292 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:calico-20210813042828-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0813 04:53:33.629012 2208249 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0813 04:53:33.636490 2208249 binaries.go:44] Found k8s binaries, skipping transfer
	I0813 04:53:33.636535 2208249 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0813 04:53:33.642077 2208249 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (543 bytes)
	I0813 04:53:33.653142 2208249 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0813 04:53:33.664252 2208249 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2079 bytes)
	I0813 04:53:33.675698 2208249 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0813 04:53:33.678207 2208249 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0813 04:53:33.685760 2208249 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292 for IP: 192.168.58.2
	I0813 04:53:33.685798 2208249 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key
	I0813 04:53:33.685811 2208249 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key
	I0813 04:53:33.685855 2208249 certs.go:294] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/client.key
	I0813 04:53:33.685861 2208249 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/client.crt with IP's: []
	I0813 04:53:34.064597 2208249 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/client.crt ...
	I0813 04:53:34.064621 2208249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/client.crt: {Name:mkde6ba858976c6e32c13585b4f740aa60cdef02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:53:34.064788 2208249 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/client.key ...
	I0813 04:53:34.064806 2208249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/client.key: {Name:mk6d26adc431535619d1c56b3d9a3c20c2516666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:53:34.064898 2208249 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.key.cee25041
	I0813 04:53:34.064910 2208249 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0813 04:53:34.224211 2208249 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.crt.cee25041 ...
	I0813 04:53:34.224235 2208249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.crt.cee25041: {Name:mk80fd2b2bdb09d00827fcb3c3bb2858325d38e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:53:34.224398 2208249 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.key.cee25041 ...
	I0813 04:53:34.224414 2208249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.key.cee25041: {Name:mk546a830038a471bf523d7a2c71cc6f0cf7b6b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:53:34.224511 2208249 certs.go:305] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.crt
	I0813 04:53:34.224570 2208249 certs.go:309] copying /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.key
	I0813 04:53:34.224630 2208249 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/proxy-client.key
	I0813 04:53:34.224641 2208249 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/proxy-client.crt with IP's: []
	I0813 04:53:35.030339 2208249 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/proxy-client.crt ...
	I0813 04:53:35.030367 2208249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/proxy-client.crt: {Name:mke43fb1c84ad798234b1671ad899749d38724db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:53:35.030546 2208249 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/proxy-client.key ...
	I0813 04:53:35.030562 2208249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/proxy-client.key: {Name:mk40f4177e09bcd8d0b5405414ecbc6ff51ced38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:53:35.030734 2208249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/2022292.pem (1338 bytes)
	W0813 04:53:35.030782 2208249 certs.go:369] ignoring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/2022292_empty.pem, impossibly tiny 0 bytes
	I0813 04:53:35.030808 2208249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca-key.pem (1675 bytes)
	I0813 04:53:35.030837 2208249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/ca.pem (1078 bytes)
	I0813 04:53:35.030865 2208249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/cert.pem (1123 bytes)
	I0813 04:53:35.030890 2208249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/key.pem (1679 bytes)
	I0813 04:53:35.030935 2208249 certs.go:373] found cert: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem (1708 bytes)
	I0813 04:53:35.031986 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0813 04:53:35.047432 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0813 04:53:35.062370 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0813 04:53:35.077284 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/calico-20210813042828-2022292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0813 04:53:35.091952 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0813 04:53:35.106437 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0813 04:53:35.121077 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0813 04:53:35.135509 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0813 04:53:35.150280 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/ssl/certs/20222922.pem --> /usr/share/ca-certificates/20222922.pem (1708 bytes)
	I0813 04:53:35.165099 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0813 04:53:35.181143 2208249 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/certs/2022292.pem --> /usr/share/ca-certificates/2022292.pem (1338 bytes)
	I0813 04:53:35.195638 2208249 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0813 04:53:35.206435 2208249 ssh_runner.go:149] Run: openssl version
	I0813 04:53:35.216135 2208249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20222922.pem && ln -fs /usr/share/ca-certificates/20222922.pem /etc/ssl/certs/20222922.pem"
	I0813 04:53:35.223856 2208249 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/20222922.pem
	I0813 04:53:35.226619 2208249 certs.go:416] hashing: -rw-r--r-- 1 root root 1708 Aug 13 03:55 /usr/share/ca-certificates/20222922.pem
	I0813 04:53:35.226674 2208249 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20222922.pem
	I0813 04:53:35.230796 2208249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20222922.pem /etc/ssl/certs/3ec20f2e.0"
	I0813 04:53:35.240537 2208249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0813 04:53:35.246485 2208249 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0813 04:53:35.249130 2208249 certs.go:416] hashing: -rw-r--r-- 1 root root 1111 Aug 13 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0813 04:53:35.249189 2208249 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0813 04:53:35.253361 2208249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0813 04:53:35.259390 2208249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2022292.pem && ln -fs /usr/share/ca-certificates/2022292.pem /etc/ssl/certs/2022292.pem"
	I0813 04:53:35.265392 2208249 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/2022292.pem
	I0813 04:53:35.267916 2208249 certs.go:416] hashing: -rw-r--r-- 1 root root 1338 Aug 13 03:55 /usr/share/ca-certificates/2022292.pem
	I0813 04:53:35.268011 2208249 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2022292.pem
	I0813 04:53:35.272038 2208249 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2022292.pem /etc/ssl/certs/51391683.0"
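	[editor's note: the `openssl x509 -hash` / `ln -fs` sequence in the log above is how certs are made discoverable to OpenSSL: each trusted CA in /etc/ssl/certs must be reachable via a file named after its subject hash (e.g. b5213941.0 for minikubeCA). A minimal standalone sketch of the same mechanism, assuming `openssl` is installed; the demo cert and temp paths here are hypothetical, not from the log:]

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
# Generate a throwaway self-signed CA (hypothetical stand-in for minikubeCA.pem)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/demoCA.pem" -days 1 2>/dev/null
# Compute the subject hash OpenSSL uses for CA lookup (8 hex digits)
hash=$(openssl x509 -hash -noout -in "$tmp/demoCA.pem")
# Install the cert under its hash name, exactly as the log's ln -fs does
ln -fs "$tmp/demoCA.pem" "$tmp/$hash.0"
ls -l "$tmp/$hash.0"
```

	[the `test -L ... || ln -fs ...` guard seen in the log just makes the symlink creation idempotent across re-runs]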
	I0813 04:53:35.277951 2208249 kubeadm.go:390] StartCluster: {Name:calico-20210813042828-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:calico-20210813042828-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 04:53:35.278026 2208249 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0813 04:53:35.278065 2208249 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0813 04:53:35.301904 2208249 cri.go:76] found id: ""
	I0813 04:53:35.301982 2208249 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0813 04:53:35.307784 2208249 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0813 04:53:35.313562 2208249 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0813 04:53:35.313606 2208249 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0813 04:53:35.319230 2208249 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0813 04:53:35.319266 2208249 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0813 04:53:35.808406 2208249 out.go:204]   - Generating certificates and keys ...
	I0813 04:53:42.630854 2208249 out.go:204]   - Booting up control plane ...
	I0813 04:54:01.707356 2208249 out.go:204]   - Configuring RBAC rules ...
	I0813 04:54:02.125022 2208249 cni.go:93] Creating CNI manager for "calico"
	I0813 04:54:02.135784 2208249 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0813 04:54:02.135948 2208249 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.21.3/kubectl ...
	I0813 04:54:02.135963 2208249 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (202053 bytes)
	I0813 04:54:02.157521 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0813 04:54:03.577233 2208249 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.21.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.419680545s)
	I0813 04:54:03.577273 2208249 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0813 04:54:03.577364 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:03.577415 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=dc1c3ca26e9449ce488a773126b8450402c94a19 minikube.k8s.io/name=calico-20210813042828-2022292 minikube.k8s.io/updated_at=2021_08_13T04_54_03_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:03.826111 2208249 ops.go:34] apiserver oom_adj: -16
	I0813 04:54:03.826593 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:04.535548 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:05.035326 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:05.535151 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:06.035938 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:06.535557 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:07.035483 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:07.535033 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:08.035542 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:08.535290 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:09.035704 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:09.535299 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:10.035859 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:10.535796 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:11.035905 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:11.534998 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:12.035622 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:12.535933 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:13.035890 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:13.535631 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:14.035763 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:14.535050 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:15.035701 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:15.535266 2208249 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0813 04:54:15.630335 2208249 kubeadm.go:985] duration metric: took 12.053008064s to wait for elevateKubeSystemPrivileges.
	I0813 04:54:15.630363 2208249 kubeadm.go:392] StartCluster complete in 40.352414925s
	I0813 04:54:15.630379 2208249 settings.go:142] acquiring lock: {Name:mke0b9bf6059169e73bfde24fe8e8162c3ec0654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:54:15.630453 2208249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 04:54:15.631859 2208249 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig: {Name:mk6797826f33680e9cda7cd38a7adfcabda9681c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0813 04:54:16.155564 2208249 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20210813042828-2022292" rescaled to 1
	I0813 04:54:16.155611 2208249 start.go:226] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0813 04:54:16.158479 2208249 out.go:177] * Verifying Kubernetes components...
	I0813 04:54:16.158536 2208249 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 04:54:16.155710 2208249 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0813 04:54:16.156052 2208249 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0813 04:54:16.158618 2208249 addons.go:59] Setting default-storageclass=true in profile "calico-20210813042828-2022292"
	I0813 04:54:16.158617 2208249 addons.go:59] Setting storage-provisioner=true in profile "calico-20210813042828-2022292"
	I0813 04:54:16.158631 2208249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20210813042828-2022292"
	I0813 04:54:16.158638 2208249 addons.go:135] Setting addon storage-provisioner=true in "calico-20210813042828-2022292"
	W0813 04:54:16.158644 2208249 addons.go:147] addon storage-provisioner should already be in state true
	I0813 04:54:16.158666 2208249 host.go:66] Checking if "calico-20210813042828-2022292" exists ...
	I0813 04:54:16.158945 2208249 cli_runner.go:115] Run: docker container inspect calico-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:54:16.159136 2208249 cli_runner.go:115] Run: docker container inspect calico-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:54:16.242159 2208249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0813 04:54:16.242257 2208249 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 04:54:16.242266 2208249 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0813 04:54:16.242318 2208249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813042828-2022292
	I0813 04:54:16.262167 2208249 addons.go:135] Setting addon default-storageclass=true in "calico-20210813042828-2022292"
	W0813 04:54:16.262196 2208249 addons.go:147] addon default-storageclass should already be in state true
	I0813 04:54:16.262220 2208249 host.go:66] Checking if "calico-20210813042828-2022292" exists ...
	I0813 04:54:16.262657 2208249 cli_runner.go:115] Run: docker container inspect calico-20210813042828-2022292 --format={{.State.Status}}
	I0813 04:54:16.357222 2208249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51011 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:54:16.372026 2208249 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0813 04:54:16.372041 2208249 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0813 04:54:16.372090 2208249 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210813042828-2022292
	I0813 04:54:16.424554 2208249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51011 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/calico-20210813042828-2022292/id_rsa Username:docker}
	I0813 04:54:16.477604 2208249 node_ready.go:35] waiting up to 5m0s for node "calico-20210813042828-2022292" to be "Ready" ...
	I0813 04:54:16.477965 2208249 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0813 04:54:16.481552 2208249 node_ready.go:49] node "calico-20210813042828-2022292" has status "Ready":"True"
	I0813 04:54:16.481573 2208249 node_ready.go:38] duration metric: took 3.945642ms waiting for node "calico-20210813042828-2022292" to be "Ready" ...
	I0813 04:54:16.481582 2208249 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 04:54:16.481782 2208249 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0813 04:54:16.494563 2208249 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace to be "Ready" ...
	I0813 04:54:16.565182 2208249 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0813 04:54:17.564234 2208249 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.086242165s)
	I0813 04:54:17.564297 2208249 start.go:736] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
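	[editor's note: the host record injection above works by piping the coredns ConfigMap through `sed`, inserting a `hosts {}` stanza before the `forward` directive so host.minikube.internal resolves inside the cluster. A standalone sketch of just the sed step, assuming GNU sed; the Corefile fragment below is a hypothetical stand-in for the real ConfigMap contents:]

```shell
#!/bin/sh
set -e
# Hypothetical Corefile fragment (stands in for the coredns ConfigMap data)
cat > Corefile <<'EOF'
.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}
EOF
# Insert a hosts {} stanza before the forward directive, as in the log's command
sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' Corefile
```

	[in the real run the sed output is piped straight into `kubectl replace -f -` to update the ConfigMap in place]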
	I0813 04:54:17.564356 2208249 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.082550133s)
	I0813 04:54:17.566629 2208249 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0813 04:54:17.566685 2208249 addons.go:344] enableAddons completed in 1.410638112s
	I0813 04:54:18.521113 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:21.021907 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:23.522936 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:26.021679 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:28.021917 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:30.021996 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:32.525046 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:35.022035 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:37.022540 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:39.522459 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:42.020904 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:44.021589 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:46.022044 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:48.521967 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:51.021909 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:53.521463 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:55.523177 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:54:58.021428 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:00.022429 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:02.521460 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:04.522049 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:07.021611 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:09.521901 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:11.522202 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:13.522255 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:16.021236 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:18.021529 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:20.520944 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:22.522378 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:25.021262 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:27.021596 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:29.021645 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:31.021912 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:33.022260 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:35.521966 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:38.021898 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:40.026824 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:42.522123 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:45.020953 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:47.021317 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:49.021608 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:51.521133 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:53.521764 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:55.521979 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:55:58.021336 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:00.021704 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:02.521538 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:05.021559 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:07.022108 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:09.521587 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:12.021558 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:14.021701 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:16.037251 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:18.521216 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:21.021012 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:23.021473 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:25.520858 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:27.521753 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:30.021776 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:32.521145 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:35.021152 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:37.021986 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:39.521465 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:41.521919 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:43.522277 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:46.021746 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:48.522421 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:51.021735 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:53.022194 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:55.522162 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:56:58.021758 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:00.520730 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:02.521825 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:04.522117 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:07.021697 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:09.521632 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:11.522309 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:13.522450 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:16.022832 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:18.521151 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:20.521380 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:23.022049 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:25.521367 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:27.522289 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:30.021730 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:32.522195 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:35.022136 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:37.526079 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:40.021681 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:42.021957 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:44.521220 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:46.521560 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:49.020986 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:51.021790 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:53.022275 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:55.521199 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:57:57.522586 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:00.021496 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:02.022437 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:04.521605 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:07.022408 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:09.521834 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:11.521966 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:14.021943 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:16.521163 2208249 pod_ready.go:102] pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:16.524773 2208249 pod_ready.go:81] duration metric: took 4m0.030177361s waiting for pod "calico-kube-controllers-85ff9ff759-q62x8" in "kube-system" namespace to be "Ready" ...
	E0813 04:58:16.524794 2208249 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0813 04:58:16.524802 2208249 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-cj52p" in "kube-system" namespace to be "Ready" ...
	I0813 04:58:18.534412 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:20.541470 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:23.034937 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:25.534757 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:27.537299 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:30.034484 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:32.034745 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:34.034829 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:36.035015 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:38.035922 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:40.534783 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:42.535039 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:45.034782 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:47.035403 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:49.535079 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:52.034830 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:54.035507 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:56.534343 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:58:58.535268 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:01.035638 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:03.036033 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:05.534446 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:07.536445 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:10.034755 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:12.036368 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:14.534739 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:16.535530 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:19.038808 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:21.535175 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:23.535360 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:25.537103 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:28.035614 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:30.036469 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:32.539257 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:35.035590 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:37.035870 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:39.534273 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:41.534623 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:43.535465 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:46.034668 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:48.534586 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:51.034697 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:53.039145 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:55.534664 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 04:59:58.034610 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:00.533871 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:02.535197 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:04.535245 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:07.038097 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:09.534037 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:11.534448 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:13.534515 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:16.034188 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:18.037215 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:20.533894 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:22.535143 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:24.537526 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:27.034647 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:29.035089 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:31.535620 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:34.034404 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:36.034712 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:38.034990 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:40.535237 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:43.035052 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:45.535217 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:47.535327 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:50.034955 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:52.035406 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:54.535153 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:57.036781 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:00:59.535260 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:02.034810 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:04.535223 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:07.035231 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:09.047094 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:11.539431 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:14.036421 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:16.036825 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:18.097781 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:20.535683 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:22.536057 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:25.035013 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:27.534394 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:29.534871 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:31.541680 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:34.034287 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:36.034490 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:38.034592 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:40.037146 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:42.534301 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:44.534652 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:46.535697 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:48.535879 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:51.035691 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:53.535058 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:56.034106 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:01:58.034462 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:02:00.036180 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:02:02.535572 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:02:04.673300 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:02:07.039056 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:02:09.536269 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:02:12.034461 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:02:14.035077 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:02:16.535733 2208249 pod_ready.go:102] pod "calico-node-cj52p" in "kube-system" namespace has status "Ready":"False"
	I0813 05:02:16.539904 2208249 pod_ready.go:81] duration metric: took 4m0.015091764s waiting for pod "calico-node-cj52p" in "kube-system" namespace to be "Ready" ...
	E0813 05:02:16.539924 2208249 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0813 05:02:16.539937 2208249 pod_ready.go:38] duration metric: took 8m0.05834316s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0813 05:02:16.541922 2208249 out.go:177] 
	W0813 05:02:16.542046 2208249 out.go:242] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0813 05:02:16.542060 2208249 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0813 05:02:16.544883 2208249 out.go:242] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                                                                │
	│    * If the above advice does not help, please let us know:                                                                                                    │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	│                                                                                                                                                                │
	│    * Please attach the following file to the GitHub issue:                                                                                                     │
	│    * - /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/logs/lastStart.txt    │
	│                                                                                                                                                                │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0813 05:02:16.548147 2208249 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (545.72s)

                                                
                                    

Test pass (208/252)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 16.46
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.07
10 TestDownloadOnly/v1.21.3/json-events 19.87
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.63
17 TestDownloadOnly/v1.22.0-rc.0/json-events 27.11
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.33
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
31 TestAddons/parallel/MetricsServer 5.79
34 TestAddons/parallel/CSI 389.22
35 TestAddons/parallel/GCPAuth 36.96
36 TestCertOptions 84.61
38 TestForceSystemdFlag 100.66
39 TestForceSystemdEnv 66.57
44 TestErrorSpam/setup 62.54
45 TestErrorSpam/start 0.88
46 TestErrorSpam/status 0.93
47 TestErrorSpam/pause 5.36
48 TestErrorSpam/unpause 1.37
49 TestErrorSpam/stop 15.14
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 124.93
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 15.79
56 TestFunctional/serial/KubeContext 0.06
57 TestFunctional/serial/KubectlGetPods 0.31
60 TestFunctional/serial/CacheCmd/cache/add_remote 5.84
61 TestFunctional/serial/CacheCmd/cache/add_local 1.12
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
63 TestFunctional/serial/CacheCmd/cache/list 0.06
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.41
66 TestFunctional/serial/CacheCmd/cache/delete 0.13
67 TestFunctional/serial/MinikubeKubectlCmd 0.44
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
69 TestFunctional/serial/ExtraConfig 49.15
70 TestFunctional/serial/ComponentHealth 0.1
71 TestFunctional/serial/LogsCmd 1.18
72 TestFunctional/serial/LogsFileCmd 1.11
74 TestFunctional/parallel/ConfigCmd 0.41
75 TestFunctional/parallel/DashboardCmd 2.66
76 TestFunctional/parallel/DryRun 0.53
77 TestFunctional/parallel/InternationalLanguage 0.21
78 TestFunctional/parallel/StatusCmd 0.93
81 TestFunctional/parallel/ServiceCmd 11.62
82 TestFunctional/parallel/AddonsCmd 0.16
85 TestFunctional/parallel/SSHCmd 0.57
86 TestFunctional/parallel/CpCmd 0.56
88 TestFunctional/parallel/FileSync 0.33
89 TestFunctional/parallel/CertSync 2
93 TestFunctional/parallel/NodeLabels 0.08
94 TestFunctional/parallel/LoadImage 1.66
95 TestFunctional/parallel/RemoveImage 2.08
96 TestFunctional/parallel/LoadImageFromFile 1.19
97 TestFunctional/parallel/BuildImage 3.65
98 TestFunctional/parallel/ListImages 0.3
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
101 TestFunctional/parallel/Version/short 0.07
102 TestFunctional/parallel/Version/components 1.27
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
107 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
109 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
110 TestFunctional/parallel/ProfileCmd/profile_list 0.35
111 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
117 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
118 TestFunctional/parallel/MountCmd/specific-port 1.62
119 TestFunctional/delete_busybox_image 0.07
120 TestFunctional/delete_my-image_image 0.04
121 TestFunctional/delete_minikube_cached_images 0.03
125 TestJSONOutput/start/Audit 0
127 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
128 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
130 TestJSONOutput/pause/Audit 0
132 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
133 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
135 TestJSONOutput/unpause/Audit 0
137 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
138 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
140 TestJSONOutput/stop/Audit 0
142 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
143 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
144 TestErrorJSONOutput 0.27
146 TestKicCustomNetwork/create_custom_network 56.27
147 TestKicCustomNetwork/use_default_bridge_network 44.27
148 TestKicExistingNetwork 45.68
149 TestMainNoArgs 0.06
152 TestMultiNode/serial/FreshStart2Nodes 130.31
153 TestMultiNode/serial/DeployApp2Nodes 4.76
154 TestMultiNode/serial/PingHostFrom2Pods 1.08
155 TestMultiNode/serial/AddNode 42.23
156 TestMultiNode/serial/ProfileList 0.3
157 TestMultiNode/serial/CopyFile 2.33
158 TestMultiNode/serial/StopNode 21.14
159 TestMultiNode/serial/StartAfterStop 30.24
160 TestMultiNode/serial/RestartKeepsNodes 188.02
161 TestMultiNode/serial/DeleteNode 24.13
162 TestMultiNode/serial/StopMultiNode 40.28
163 TestMultiNode/serial/RestartMultiNode 90.49
164 TestMultiNode/serial/ValidateNameConflict 55.79
170 TestDebPackageInstall/install_arm64_debian:sid/minikube 0
171 TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver 12.28
173 TestDebPackageInstall/install_arm64_debian:latest/minikube 0
174 TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver 10.67
176 TestDebPackageInstall/install_arm64_debian:10/minikube 0
177 TestDebPackageInstall/install_arm64_debian:10/kvm2-driver 10.16
179 TestDebPackageInstall/install_arm64_debian:9/minikube 0
180 TestDebPackageInstall/install_arm64_debian:9/kvm2-driver 9
182 TestDebPackageInstall/install_arm64_ubuntu:latest/minikube 0
183 TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver 13.57
185 TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube 0
186 TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver 12.86
188 TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube 0
189 TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver 13.27
191 TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube 0
192 TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver 11.56
198 TestInsufficientStorage 22.37
201 TestKubernetesUpgrade 257.2
204 TestPause/serial/Start 100.3
205 TestPause/serial/SecondStartNoReconfiguration 6.46
206 TestPause/serial/Pause 0.65
207 TestPause/serial/VerifyStatus 0.29
208 TestPause/serial/Unpause 0.57
209 TestPause/serial/PauseAgain 8.74
210 TestPause/serial/DeletePaused 3.48
211 TestPause/serial/VerifyDeletedResources 0.47
226 TestNetworkPlugins/group/false 0.86
231 TestStartStop/group/old-k8s-version/serial/FirstStart 144.45
233 TestStartStop/group/no-preload/serial/FirstStart 94.3
234 TestStartStop/group/no-preload/serial/DeployApp 8.71
235 TestStartStop/group/old-k8s-version/serial/DeployApp 8.69
236 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
237 TestStartStop/group/no-preload/serial/Stop 20.23
238 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.77
239 TestStartStop/group/old-k8s-version/serial/Stop 20.24
240 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
241 TestStartStop/group/no-preload/serial/SecondStart 368.45
242 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
243 TestStartStop/group/old-k8s-version/serial/SecondStart 372.18
244 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
245 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
246 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
247 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
248 TestStartStop/group/no-preload/serial/Pause 2.54
249 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.24
251 TestStartStop/group/embed-certs/serial/FirstStart 128.89
252 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.55
255 TestStartStop/group/default-k8s-different-port/serial/FirstStart 126.63
256 TestStartStop/group/embed-certs/serial/DeployApp 8.52
257 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
258 TestStartStop/group/embed-certs/serial/Stop 20.39
259 TestStartStop/group/default-k8s-different-port/serial/DeployApp 7.76
260 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.94
261 TestStartStop/group/default-k8s-different-port/serial/Stop 20.21
262 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
263 TestStartStop/group/embed-certs/serial/SecondStart 378.2
264 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.18
265 TestStartStop/group/default-k8s-different-port/serial/SecondStart 347.02
266 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.07
267 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.16
268 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.36
269 TestStartStop/group/default-k8s-different-port/serial/Pause 2.93
270 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
272 TestStartStop/group/newest-cni/serial/FirstStart 74.71
273 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 8.61
274 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.68
275 TestStartStop/group/embed-certs/serial/Pause 2.92
276 TestNetworkPlugins/group/auto/Start 140.23
277 TestStartStop/group/newest-cni/serial/DeployApp 0
278 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.52
279 TestStartStop/group/newest-cni/serial/Stop 20.62
280 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
281 TestStartStop/group/newest-cni/serial/SecondStart 37.85
282 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
283 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
284 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.43
285 TestStartStop/group/newest-cni/serial/Pause 2.46
286 TestNetworkPlugins/group/custom-weave/Start 92.77
287 TestNetworkPlugins/group/auto/KubeletFlags 0.27
288 TestNetworkPlugins/group/auto/NetCatPod 8.38
289 TestNetworkPlugins/group/auto/DNS 0.21
290 TestNetworkPlugins/group/auto/Localhost 0.19
291 TestNetworkPlugins/group/auto/HairPin 0.19
293 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.3
294 TestNetworkPlugins/group/custom-weave/NetCatPod 9.63
296 TestNetworkPlugins/group/enable-default-cni/Start 144.66
297 TestNetworkPlugins/group/kindnet/Start 124.67
298 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
299 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.54
300 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
301 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
302 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
303 TestNetworkPlugins/group/bridge/Start 106.67
304 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
306 TestNetworkPlugins/group/kindnet/NetCatPod 9.57
307 TestNetworkPlugins/group/kindnet/DNS 0.29
308 TestNetworkPlugins/group/kindnet/Localhost 0.3
309 TestNetworkPlugins/group/kindnet/HairPin 0.27
310 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
311 TestNetworkPlugins/group/bridge/NetCatPod 9.38
312 TestNetworkPlugins/group/bridge/DNS 0.19
313 TestNetworkPlugins/group/bridge/Localhost 0.16
314 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.14.0/json-events (16.46s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210813032822-2022292 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210813032822-2022292 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (16.456469679s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (16.46s)

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210813032822-2022292
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210813032822-2022292: exit status 85 (73.544591ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 03:28:22
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 03:28:22.094564 2022297 out.go:298] Setting OutFile to fd 1 ...
	I0813 03:28:22.094655 2022297 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:28:22.094665 2022297 out.go:311] Setting ErrFile to fd 2...
	I0813 03:28:22.094669 2022297 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:28:22.094794 2022297 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0813 03:28:22.094908 2022297 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0813 03:28:22.095127 2022297 out.go:305] Setting JSON to true
	I0813 03:28:22.096070 2022297 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":47446,"bootTime":1628777856,"procs":372,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 03:28:22.096155 2022297 start.go:121] virtualization:  
	I0813 03:28:22.099301 2022297 notify.go:169] Checking for updates...
	I0813 03:28:22.101763 2022297 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 03:28:22.138712 2022297 docker.go:132] docker version: linux-20.10.8
	I0813 03:28:22.138817 2022297 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:28:22.247122 2022297 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:28:22.190515107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:28:22.247255 2022297 docker.go:244] overlay module found
	I0813 03:28:22.249320 2022297 start.go:278] selected driver: docker
	I0813 03:28:22.249338 2022297 start.go:751] validating driver "docker" against <nil>
	I0813 03:28:22.249465 2022297 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:28:22.329866 2022297 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:28:22.276200674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:28:22.329983 2022297 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0813 03:28:22.330262 2022297 start_flags.go:344] Using suggested 2200MB memory alloc based on sys=7845MB, container=7845MB
	I0813 03:28:22.330358 2022297 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0813 03:28:22.330375 2022297 cni.go:93] Creating CNI manager for ""
	I0813 03:28:22.330382 2022297 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:28:22.330390 2022297 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 03:28:22.330395 2022297 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0813 03:28:22.330400 2022297 start_flags.go:272] Found "CNI" CNI - setting NetworkPlugin=cni
	I0813 03:28:22.330410 2022297 start_flags.go:277] config:
	{Name:download-only-20210813032822-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210813032822-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:co
ntainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:28:22.332854 2022297 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 03:28:22.334811 2022297 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0813 03:28:22.334903 2022297 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 03:28:22.369210 2022297 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 03:28:22.369241 2022297 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 03:28:22.416187 2022297 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4
	I0813 03:28:22.416210 2022297 cache.go:56] Caching tarball of preloaded images
	I0813 03:28:22.416439 2022297 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0813 03:28:22.418597 2022297 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4 ...
	I0813 03:28:22.535936 2022297 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:351eb6ada75b71a92acbf8ac88056f65 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4
	I0813 03:28:35.275595 2022297 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4 ...
	I0813 03:28:35.275669 2022297 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813032822-2022292"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

TestDownloadOnly/v1.21.3/json-events (19.87s)

=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210813032822-2022292 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210813032822-2022292 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (19.865063667s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (19.87s)

TestDownloadOnly/v1.21.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

TestDownloadOnly/v1.21.3/LogsDuration (0.63s)

=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210813032822-2022292
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210813032822-2022292: exit status 85 (631.906426ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 03:28:38
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 03:28:38.626024 2022383 out.go:298] Setting OutFile to fd 1 ...
	I0813 03:28:38.626111 2022383 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:28:38.626122 2022383 out.go:311] Setting ErrFile to fd 2...
	I0813 03:28:38.626126 2022383 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:28:38.626260 2022383 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0813 03:28:38.626380 2022383 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0813 03:28:38.626512 2022383 out.go:305] Setting JSON to true
	I0813 03:28:38.627415 2022383 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":47462,"bootTime":1628777856,"procs":373,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 03:28:38.627483 2022383 start.go:121] virtualization:  
	I0813 03:28:38.630200 2022383 notify.go:169] Checking for updates...
	W0813 03:28:38.632704 2022383 start.go:659] api.Load failed for download-only-20210813032822-2022292: filestore "download-only-20210813032822-2022292": Docker machine "download-only-20210813032822-2022292" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 03:28:38.632758 2022383 driver.go:335] Setting default libvirt URI to qemu:///system
	W0813 03:28:38.632783 2022383 start.go:659] api.Load failed for download-only-20210813032822-2022292: filestore "download-only-20210813032822-2022292": Docker machine "download-only-20210813032822-2022292" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 03:28:38.668610 2022383 docker.go:132] docker version: linux-20.10.8
	I0813 03:28:38.668701 2022383 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:28:38.771614 2022383 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:28:38.712369842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:28:38.771716 2022383 docker.go:244] overlay module found
	I0813 03:28:38.774162 2022383 start.go:278] selected driver: docker
	I0813 03:28:38.774180 2022383 start.go:751] validating driver "docker" against &{Name:download-only-20210813032822-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210813032822-2022292 Namespace:default APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:28:38.774345 2022383 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:28:38.849527 2022383 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:28:38.799433643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:28:38.849856 2022383 cni.go:93] Creating CNI manager for ""
	I0813 03:28:38.849874 2022383 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:28:38.849884 2022383 start_flags.go:277] config:
	{Name:download-only-20210813032822-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210813032822-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:co
ntainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:28:38.851818 2022383 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 03:28:38.853687 2022383 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:28:38.853785 2022383 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 03:28:38.897093 2022383 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 03:28:38.897127 2022383 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 03:28:38.925025 2022383 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	I0813 03:28:38.925051 2022383 cache.go:56] Caching tarball of preloaded images
	I0813 03:28:38.925261 2022383 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0813 03:28:38.927288 2022383 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4 ...
	I0813 03:28:39.052111 2022383 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:9d640646cc20893f4eeb92367d325250 -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813032822-2022292"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.63s)

TestDownloadOnly/v1.22.0-rc.0/json-events (27.11s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210813032822-2022292 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-20210813032822-2022292 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (27.108135213s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (27.11s)

TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-20210813032822-2022292
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-20210813032822-2022292: exit status 85 (77.545647ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/13 03:28:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.16.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0813 03:28:59.126581 2022467 out.go:298] Setting OutFile to fd 1 ...
	I0813 03:28:59.127018 2022467 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:28:59.127030 2022467 out.go:311] Setting ErrFile to fd 2...
	I0813 03:28:59.127034 2022467 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 03:28:59.127277 2022467 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	W0813 03:28:59.127451 2022467 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/config/config.json: no such file or directory
	I0813 03:28:59.127611 2022467 out.go:305] Setting JSON to true
	I0813 03:28:59.128598 2022467 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":47483,"bootTime":1628777856,"procs":373,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 03:28:59.128867 2022467 start.go:121] virtualization:  
	I0813 03:28:59.195347 2022467 notify.go:169] Checking for updates...
	W0813 03:28:59.256195 2022467 start.go:659] api.Load failed for download-only-20210813032822-2022292: filestore "download-only-20210813032822-2022292": Docker machine "download-only-20210813032822-2022292" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 03:28:59.256251 2022467 driver.go:335] Setting default libvirt URI to qemu:///system
	W0813 03:28:59.256276 2022467 start.go:659] api.Load failed for download-only-20210813032822-2022292: filestore "download-only-20210813032822-2022292": Docker machine "download-only-20210813032822-2022292" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0813 03:28:59.308167 2022467 docker.go:132] docker version: linux-20.10.8
	I0813 03:28:59.308251 2022467 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:28:59.387469 2022467 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:28:59.334615981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:28:59.387585 2022467 docker.go:244] overlay module found
	I0813 03:28:59.444781 2022467 start.go:278] selected driver: docker
	I0813 03:28:59.444802 2022467 start.go:751] validating driver "docker" against &{Name:download-only-20210813032822-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210813032822-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:28:59.445011 2022467 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 03:28:59.522585 2022467 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-13 03:28:59.470498837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 03:28:59.522913 2022467 cni.go:93] Creating CNI manager for ""
	I0813 03:28:59.522933 2022467 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0813 03:28:59.522944 2022467 start_flags.go:277] config:
	{Name:download-only-20210813032822-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210813032822-2022292 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 03:28:59.633486 2022467 cache.go:117] Beginning downloading kic base image for docker with containerd
	I0813 03:28:59.694935 2022467 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 03:28:59.694950 2022467 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon
	I0813 03:28:59.727944 2022467 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull
	I0813 03:28:59.727969 2022467 cache.go:139] gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load
	I0813 03:28:59.758547 2022467 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4
	I0813 03:28:59.758567 2022467 cache.go:56] Caching tarball of preloaded images
	I0813 03:28:59.758761 2022467 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0813 03:28:59.820418 2022467 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	I0813 03:28:59.934989 2022467 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:54a0a9839942448749353ea5722c4adc -> /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4
	I0813 03:29:22.662707 2022467 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	I0813 03:29:22.662790 2022467 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210813032822-2022292"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.33s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.33s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-20210813032822-2022292
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 3.321105ms
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-77c99ccb96-vn6tn" [985bccb5-7c0b-4df0-91ce-0cd5e67a9688] Running
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014286458s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210813032940-2022292 top pods -n kube-system
addons_test.go:374: kubectl --context addons-20210813032940-2022292 top pods -n kube-system: unexpected stderr: W0813 03:48:58.419311 2042414 top_pod.go:140] Using json format to get metrics. Next release will switch to protocol-buffers, switch early by passing --use-protocol-buffers flag
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210813032940-2022292 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/CSI (389.22s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 20.236718ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210813032940-2022292 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813032940-2022292 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210813032940-2022292 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [f177c647-5df7-42e2-9a41-5c2f01f61e30] Pending
helpers_test.go:343: "task-pv-pod" [f177c647-5df7-42e2-9a41-5c2f01f61e30] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [f177c647-5df7-42e2-9a41-5c2f01f61e30] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 5m51.027280095s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210813032940-2022292 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210813032940-2022292 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210813032940-2022292 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210813032940-2022292 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-20210813032940-2022292 delete pod task-pv-pod: (2.077482318s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210813032940-2022292 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210813032940-2022292 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210813032940-2022292 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210813032940-2022292 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [5924e29c-e99e-4157-9ae0-be1cc04f121f] Pending
helpers_test.go:343: "task-pv-pod-restore" [5924e29c-e99e-4157-9ae0-be1cc04f121f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [5924e29c-e99e-4157-9ae0-be1cc04f121f] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 24.009995895s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210813032940-2022292 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-20210813032940-2022292 delete pod task-pv-pod-restore: (1.356491082s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210813032940-2022292 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210813032940-2022292 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210813032940-2022292 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-arm64 -p addons-20210813032940-2022292 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.795961429s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210813032940-2022292 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (389.22s)

TestAddons/parallel/GCPAuth (36.96s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210813032940-2022292 create -f testdata/busybox.yaml
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [2254ae24-28a8-4d2c-9259-7f13daae77b6] Pending
helpers_test.go:343: "busybox" [2254ae24-28a8-4d2c-9259-7f13daae77b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [2254ae24-28a8-4d2c-9259-7f13daae77b6] Running
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 9.011320116s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210813032940-2022292 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:643: (dbg) Run:  kubectl --context addons-20210813032940-2022292 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210813032940-2022292 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:709: (dbg) Run:  out/minikube-linux-arm64 -p addons-20210813032940-2022292 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:709: (dbg) Done: out/minikube-linux-arm64 -p addons-20210813032940-2022292 addons disable gcp-auth --alsologtostderr -v=1: (27.065756303s)
--- PASS: TestAddons/parallel/GCPAuth (36.96s)

TestCertOptions (84.61s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-20210813043009-2022292 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-20210813043009-2022292 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (1m21.042035526s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-20210813043009-2022292 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210813043009-2022292 config view
helpers_test.go:176: Cleaning up "cert-options-20210813043009-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-20210813043009-2022292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-20210813043009-2022292: (2.994395185s)
--- PASS: TestCertOptions (84.61s)

TestForceSystemdFlag (100.66s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-20210813042828-2022292 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0813 04:28:29.758226 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-20210813042828-2022292 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m33.855638223s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-20210813042828-2022292 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-20210813042828-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-20210813042828-2022292
E0813 04:30:04.505559 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-20210813042828-2022292: (6.453544377s)
--- PASS: TestForceSystemdFlag (100.66s)

TestForceSystemdEnv (66.57s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-20210813042720-2022292 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-20210813042720-2022292 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m3.74310541s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-20210813042720-2022292 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-20210813042720-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-20210813042720-2022292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-20210813042720-2022292: (2.553222457s)
--- PASS: TestForceSystemdEnv (66.57s)

TestErrorSpam/setup (62.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-20210813035328-2022292 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813035328-2022292 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-arm64 start -p nospam-20210813035328-2022292 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210813035328-2022292 --driver=docker  --container-runtime=containerd: (1m2.542858723s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (62.54s)

TestErrorSpam/start (0.88s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 start --dry-run
--- PASS: TestErrorSpam/start (0.88s)

TestErrorSpam/status (0.93s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 status
--- PASS: TestErrorSpam/status (0.93s)

TestErrorSpam/pause (5.36s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 pause: (4.5015499s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 pause
--- PASS: TestErrorSpam/pause (5.36s)

TestErrorSpam/unpause (1.37s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 unpause
--- PASS: TestErrorSpam/unpause (1.37s)

TestErrorSpam/stop (15.14s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 stop: (14.892694456s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-arm64 -p nospam-20210813035328-2022292 --log_dir /tmp/nospam-20210813035328-2022292 stop
--- PASS: TestErrorSpam/stop (15.14s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/files/etc/test/nested/copy/2022292/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (124.93s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0813 03:57:01.447411 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 03:57:01.453049 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 03:57:01.463326 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 03:57:01.483524 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 03:57:01.523726 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 03:57:01.603951 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 03:57:01.764352 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 03:57:02.085057 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 03:57:02.725489 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 03:57:04.006547 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
functional_test.go:1982: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (2m4.928528607s)
--- PASS: TestFunctional/serial/StartWithProxy (124.93s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.79s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --alsologtostderr -v=8
E0813 03:57:06.566940 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 03:57:11.687906 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
functional_test.go:627: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --alsologtostderr -v=8: (15.794198201s)
functional_test.go:631: soft start took 15.794665268s for "functional-20210813035500-2022292" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.79s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.31s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210813035500-2022292 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.31s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 cache add k8s.gcr.io/pause:3.1
E0813 03:57:21.928100 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
functional_test.go:982: (dbg) Done: out/minikube-linux-arm64 -p functional-20210813035500-2022292 cache add k8s.gcr.io/pause:3.1: (2.105151653s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-arm64 -p functional-20210813035500-2022292 cache add k8s.gcr.io/pause:3.3: (1.910600582s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-arm64 -p functional-20210813035500-2022292 cache add k8s.gcr.io/pause:latest: (1.820157603s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.84s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210813035500-2022292 /tmp/functional-20210813035500-2022292849018886
functional_test.go:1024: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 cache add minikube-local-cache-test:functional-20210813035500-2022292
functional_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 cache delete minikube-local-cache-test:functional-20210813035500-2022292
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210813035500-2022292
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (270.799966ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-linux-arm64 -p functional-20210813035500-2022292 cache reload: (1.543506529s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.41s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-arm64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.44s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 kubectl -- --context functional-20210813035500-2022292 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.44s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210813035500-2022292 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (49.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0813 03:57:42.408516 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
functional_test.go:715: (dbg) Done: out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.154624055s)
functional_test.go:719: restart took 49.154723844s for "functional-20210813035500-2022292" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (49.15s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210813035500-2022292 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.18s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 logs
functional_test.go:1165: (dbg) Done: out/minikube-linux-arm64 -p functional-20210813035500-2022292 logs: (1.179231218s)
--- PASS: TestFunctional/serial/LogsCmd (1.18s)

TestFunctional/serial/LogsFileCmd (1.11s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 logs --file /tmp/functional-20210813035500-2022292639549357/logs.txt
E0813 03:58:23.369518 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
functional_test.go:1181: (dbg) Done: out/minikube-linux-arm64 -p functional-20210813035500-2022292 logs --file /tmp/functional-20210813035500-2022292639549357/logs.txt: (1.110889081s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.11s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210813035500-2022292 config get cpus: exit status 14 (57.755659ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 config get cpus
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210813035500-2022292 config get cpus: exit status 14 (69.648384ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

TestFunctional/parallel/DashboardCmd (2.66s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-20210813035500-2022292 --alsologtostderr -v=1]
2021/08/13 04:04:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:862: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-20210813035500-2022292 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to kill pid 2060265: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (2.66s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (246.555593ms)
-- stdout --
	* [functional-20210813035500-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
-- /stdout --
** stderr ** 
	I0813 04:04:08.952056 2060001 out.go:298] Setting OutFile to fd 1 ...
	I0813 04:04:08.952211 2060001 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:04:08.952220 2060001 out.go:311] Setting ErrFile to fd 2...
	I0813 04:04:08.952223 2060001 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:04:08.952350 2060001 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 04:04:08.952587 2060001 out.go:305] Setting JSON to false
	I0813 04:04:08.953302 2060001 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":49593,"bootTime":1628777856,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 04:04:08.953363 2060001 start.go:121] virtualization:  
	I0813 04:04:08.955837 2060001 out.go:177] * [functional-20210813035500-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0813 04:04:08.958269 2060001 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 04:04:08.960100 2060001 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 04:04:08.961825 2060001 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 04:04:08.963632 2060001 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 04:04:08.964516 2060001 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 04:04:09.012732 2060001 docker.go:132] docker version: linux-20.10.8
	I0813 04:04:09.012821 2060001 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:04:09.130244 2060001 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:04:09.050643004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:04:09.130346 2060001 docker.go:244] overlay module found
	I0813 04:04:09.132572 2060001 out.go:177] * Using the docker driver based on existing profile
	I0813 04:04:09.132592 2060001 start.go:278] selected driver: docker
	I0813 04:04:09.132597 2060001 start.go:751] validating driver "docker" against &{Name:functional-20210813035500-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813035500-2022292 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-a
liases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 04:04:09.132716 2060001 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 04:04:09.132748 2060001 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 04:04:09.132759 2060001 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 04:04:09.134718 2060001 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 04:04:09.136842 2060001 out.go:177] 
	W0813 04:04:09.136933 2060001 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0813 04:04:09.138880 2060001 out.go:177] 

** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.53s)
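The DryRun run above exits with status 23 and `RSRC_INSUFFICIENT_REQ_MEMORY` because the requested 250MiB falls below minikube's usable minimum. A hypothetical shell sketch of that pre-flight comparison (minikube's real check is implemented in Go; the 1800MB floor and the exit status are taken from the log):

```shell
# Hypothetical sketch of the pre-flight memory validation seen in the log
# above (not minikube's actual implementation); 1800 is the usable minimum
# quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message.
check_memory() {
  req_mb=$1    # requested allocation, e.g. 250 from --memory 250MB
  min_mb=1800  # usable minimum from the log
  if [ "$req_mb" -lt "$min_mb" ]; then
    echo "RSRC_INSUFFICIENT_REQ_MEMORY"
    return 23  # matches the observed `exit status 23`
  fi
  echo "ok"
}
check_memory 250 || true   # prints RSRC_INSUFFICIENT_REQ_MEMORY
```

The check fires before any container or VM is created, which is why `--dry-run` is enough to reproduce it.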

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-20210813035500-2022292 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (211.767847ms)

-- stdout --
	* [functional-20210813035500-2022292] minikube v1.22.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0813 04:04:09.490699 2060113 out.go:298] Setting OutFile to fd 1 ...
	I0813 04:04:09.490780 2060113 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:04:09.490790 2060113 out.go:311] Setting ErrFile to fd 2...
	I0813 04:04:09.490794 2060113 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:04:09.490957 2060113 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 04:04:09.491168 2060113 out.go:305] Setting JSON to false
	I0813 04:04:09.491964 2060113 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":49593,"bootTime":1628777856,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 04:04:09.492032 2060113 start.go:121] virtualization:  
	I0813 04:04:09.494172 2060113 out.go:177] * [functional-20210813035500-2022292] minikube v1.22.0 sur Ubuntu 20.04 (arm64)
	I0813 04:04:09.496967 2060113 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 04:04:09.498796 2060113 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 04:04:09.500530 2060113 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 04:04:09.502466 2060113 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 04:04:09.503262 2060113 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 04:04:09.541053 2060113 docker.go:132] docker version: linux-20.10.8
	I0813 04:04:09.541132 2060113 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:04:09.620687 2060113 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:04:09.56905277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInf
o:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:04:09.620776 2060113 docker.go:244] overlay module found
	I0813 04:04:09.628360 2060113 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0813 04:04:09.628382 2060113 start.go:278] selected driver: docker
	I0813 04:04:09.628388 2060113 start.go:751] validating driver "docker" against &{Name:functional-20210813035500-2022292 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210813035500-2022292 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-a
liases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0813 04:04:09.628498 2060113 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 04:04:09.628530 2060113 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 04:04:09.628538 2060113 out.go:242] ! Votre groupe de contrôle ne permet pas de définir la mémoire.
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0813 04:04:09.630415 2060113 out.go:177]   - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 04:04:09.632709 2060113 out.go:177] 
	W0813 04:04:09.632803 2060113 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0813 04:04:09.634654 2060113 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (0.93s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 status
functional_test.go:815: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:826: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)

TestFunctional/parallel/ServiceCmd (11.62s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1355: (dbg) Run:  kubectl --context functional-20210813035500-2022292 create deployment hello-node --image=k8s.gcr.io/echoserver-arm:1.8
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210813035500-2022292 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6d98884d59-8txch" [71b13262-26a1-4415-b188-9c796b4f9b6c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6d98884d59-8txch" [71b13262-26a1-4415-b188-9c796b4f9b6c] Running
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.026716076s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 service list
functional_test.go:1385: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.49.2:32405
functional_test.go:1405: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.49.2:32405
functional_test.go:1431: Attempting to fetch http://192.168.49.2:32405 ...
functional_test.go:1450: http://192.168.49.2:32405: success! body:

Hostname: hello-node-6d98884d59-8txch

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32405
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (11.62s)
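The endpoints discovered above (`https://192.168.49.2:32405`, `http://192.168.49.2:32405`) are simply the minikube node IP joined with the NodePort Kubernetes assigned to the hello-node service. A sketch of that composition using values copied from this log (against a live cluster they would come from `minikube ip` and the service spec):

```shell
# Values below are taken from the log output above; this only illustrates
# how `minikube service --url` composes its result, not how it looks it up.
node_ip="192.168.49.2"   # minikube node IP
node_port="32405"        # NodePort assigned to hello-node
service_url="http://${node_ip}:${node_port}"
echo "$service_url"      # prints http://192.168.49.2:32405
```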

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/SSHCmd (0.57s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "echo hello"
functional_test.go:1515: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (0.56s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.56s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/2022292/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo cat /etc/test/nested/copy/2022292/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
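The FileSync test verifies minikube's host-to-node file sync: anything placed under `$MINIKUBE_HOME/files/<path>` on the host is copied into the node at `/<path>` when the cluster starts, which is why the test reads back `/etc/test/nested/copy/2022292/hosts`. A sketch of the host-side layout only, using a throwaway directory (the actual copy into the node requires a running cluster):

```shell
# Host-side layout that drives the file sync checked above: files under
# $MINIKUBE_HOME/files/<path> land in the node at /<path> on start.
# A temp dir stands in for the real MINIKUBE_HOME here.
demo_home=$(mktemp -d)
mkdir -p "$demo_home/files/etc/test/nested/copy/2022292"
echo "Test file for checking file sync process" \
  > "$demo_home/files/etc/test/nested/copy/2022292/hosts"
# After `minikube start`, this file would appear inside the node as
# /etc/test/nested/copy/2022292/hosts -- the path the test cats via ssh.
cat "$demo_home/files/etc/test/nested/copy/2022292/hosts"
```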

TestFunctional/parallel/CertSync (2s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/2022292.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo cat /etc/ssl/certs/2022292.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/2022292.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo cat /usr/share/ca-certificates/2022292.pem"
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1746: Checking for existence of /etc/ssl/certs/20222922.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo cat /etc/ssl/certs/20222922.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/20222922.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo cat /usr/share/ca-certificates/20222922.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.00s)
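Alongside the `.pem` copies, CertSync checks hash-named entries like `/etc/ssl/certs/51391683.0` and `/etc/ssl/certs/3ec20f2e.0`. Those names follow OpenSSL's subject-hash convention, `<subject hash>.0`, which is how the system trust store indexes certificates. A sketch computing such a name for a throwaway self-signed cert (assumes the `openssl` CLI is available; the hash value depends on the subject chosen):

```shell
# Generate a disposable self-signed cert and derive the hash-based filename
# the trust store (and minikube's cert sync) would use for it.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=certsync-demo" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
hash=$(openssl x509 -subject_hash -noout -in "$tmp/cert.pem")
echo "would be installed as /etc/ssl/certs/${hash}.0"
```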

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210813035500-2022292 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/LoadImage (1.66s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210813035500-2022292
functional_test.go:252: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 image load docker.io/library/busybox:load-functional-20210813035500-2022292

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Done: out/minikube-linux-arm64 -p functional-20210813035500-2022292 image load docker.io/library/busybox:load-functional-20210813035500-2022292: (1.016577577s)
functional_test.go:373: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210813035500-2022292 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210813035500-2022292
--- PASS: TestFunctional/parallel/LoadImage (1.66s)

TestFunctional/parallel/RemoveImage (2.08s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210813035500-2022292
functional_test.go:344: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 image load docker.io/library/busybox:remove-functional-20210813035500-2022292
functional_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 image rm docker.io/library/busybox:remove-functional-20210813035500-2022292
functional_test.go:387: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210813035500-2022292 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (2.08s)

TestFunctional/parallel/LoadImageFromFile (1.19s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210813035500-2022292
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210813035500-2022292
functional_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/busybox.tar
functional_test.go:387: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210813035500-2022292 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (1.19s)

TestFunctional/parallel/BuildImage (3.65s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 image build -t localhost/my-image:functional-20210813035500-2022292 testdata/build

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-arm64 -p functional-20210813035500-2022292 image build -t localhost/my-image:functional-20210813035500-2022292 testdata/build: (3.285080784s)
functional_test.go:415: (dbg) Stderr: out/minikube-linux-arm64 -p functional-20210813035500-2022292 image build -t localhost/my-image:functional-20210813035500-2022292 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 sha256:d4c12f7b86ffd15b7badf4727b52c4c0afc087962b48e2b0e3ba244f0b842356
#1 transferring dockerfile: 77B done
#1 DONE 0.0s

#2 [internal] load .dockerignore
#2 sha256:ab29f9fa5e0a41e3b06269f8764f89c7513cdbbac25e36aa80e2e35d93eb64f5
#2 transferring context: 2B done
#2 DONE 0.0s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 sha256:0a79f84dd133133d07fb884b418e9bf0d7cf3d0c3447f64349c6cd669102adf5
#3 DONE 0.7s

#6 [internal] load build context
#6 sha256:2b197becb99826633bf919d7835bb76f79d294866df49e58d4b530fbf408e6fb
#6 transferring context: 62B done
#6 DONE 0.0s

#4 [1/3] FROM docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
#4 sha256:421de0c542c41754c031ad99bde55bd9e3f4a39ada1375c25d50d762b9b99eee
#4 resolve docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60 0.0s done
#4 extracting sha256:38cc3b49dbab817c9404b9a301d1f673d4b0c2e3497dbcfbea2be77516679682
#4 extracting sha256:38cc3b49dbab817c9404b9a301d1f673d4b0c2e3497dbcfbea2be77516679682 0.1s done
#4 DONE 0.2s

#5 [2/3] RUN true
#5 sha256:8215da3aa129540f1c809b7cd79e8adeabbccedfba0d395307bc8627e9f76c0a
#5 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 sha256:e08234ae97c26754c0224d37e3c17bd3915f9a4f1a5f22838dfabf778c9fddeb
#7 DONE 0.0s

#8 exporting to image
#8 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:de3885bdc3bdba2944d2f5020578aff5af09caee2f8003755625ed6f6832e678 0.0s done
#8 exporting config sha256:247c976fd76663d5f7cfce6a9079187af1d90a1f8dbfcf918cc361821ad41b52 0.0s done
#8 naming to localhost/my-image:functional-20210813035500-2022292 done
#8 DONE 0.3s
functional_test.go:373: (dbg) Run:  out/minikube-linux-arm64 ssh -p functional-20210813035500-2022292 -- sudo crictl inspecti localhost/my-image:functional-20210813035500-2022292
--- PASS: TestFunctional/parallel/BuildImage (3.65s)

TestFunctional/parallel/ListImages (0.3s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 image ls

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:446: (dbg) Stdout: out/minikube-linux-arm64 -p functional-20210813035500-2022292 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210813035500-2022292
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.30s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo systemctl is-active docker": exit status 1 (314.931441ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo systemctl is-active crio": exit status 1 (373.173071ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.27s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 version -o=json --components
functional_test.go:2016: (dbg) Done: out/minikube-linux-arm64 -p functional-20210813035500-2022292 version -o=json --components: (1.267957094s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-arm64 -p functional-20210813035500-2022292 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1245: Took "286.086194ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1259: Took "63.910313ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1295: Took "293.536144ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1308: Took "61.135573ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-arm64 -p functional-20210813035500-2022292 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/specific-port (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-20210813035500-2022292 /tmp/mounttest777679665:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (311.506926ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-20210813035500-2022292 /tmp/mounttest777679665:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh "sudo umount -f /mount-9p": exit status 1 (257.350115ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-arm64 -p functional-20210813035500-2022292 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-20210813035500-2022292 /tmp/mounttest777679665:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

TestFunctional/delete_busybox_image (0.07s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210813035500-2022292
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210813035500-2022292
--- PASS: TestFunctional/delete_busybox_image (0.07s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210813035500-2022292
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210813035500-2022292
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-20210813040825-2022292 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-20210813040825-2022292 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.910343ms)

-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210813040825-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"14ebb272-3887-4a1e-bb8d-2985fefe1d23","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"34faf5df-e15f-4a95-8b2e-a6dcdcb22d2b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig"},"datacontenttype":"application/json","id":"406d4fcd-aafd-4a35-a66a-b7c2e876041c","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube"},"datacontenttype":"application/json","id":"b38f01bb-c8b9-4c03-a23a-33e49fc96daf","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"},"datacontenttype":"application/json","id":"dc23eaf6-7738-417d-9585-77c36c3ef459","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"badd29ef-d0cc-46d5-957d-a09b4a73d2e7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210813040825-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-20210813040825-2022292
--- PASS: TestErrorJSONOutput (0.27s)

TestKicCustomNetwork/create_custom_network (56.27s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-20210813040825-2022292 --network=
E0813 04:08:29.759217 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:29.764898 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:29.775101 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:29.795295 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:29.835515 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:29.915723 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:30.076078 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:30.396554 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:31.037499 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:32.318100 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:34.878273 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:39.999402 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:08:50.240391 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:09:10.720570 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-20210813040825-2022292 --network=: (53.963970197s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210813040825-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-20210813040825-2022292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-20210813040825-2022292: (2.272284745s)
--- PASS: TestKicCustomNetwork/create_custom_network (56.27s)

TestKicCustomNetwork/use_default_bridge_network (44.27s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-20210813040921-2022292 --network=bridge
E0813 04:09:51.681499 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-20210813040921-2022292 --network=bridge: (42.089067146s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20210813040921-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-20210813040921-2022292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-20210813040921-2022292: (2.131358026s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (44.27s)

TestKicExistingNetwork (45.68s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-20210813041006-2022292 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-20210813041006-2022292 --network=existing-network: (43.186182867s)
helpers_test.go:176: Cleaning up "existing-network-20210813041006-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-20210813041006-2022292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-20210813041006-2022292: (2.276943185s)
--- PASS: TestKicExistingNetwork (45.68s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMultiNode/serial/FreshStart2Nodes (130.31s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210813041051-2022292 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0813 04:11:13.602255 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:12:01.447138 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210813041051-2022292 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m9.803546349s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (130.31s)

TestMultiNode/serial/DeployApp2Nodes (4.76s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- rollout status deployment/busybox
multinode_test.go:467: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- rollout status deployment/busybox: (2.32230533s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- exec busybox-84b6686758-k8l75 -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- exec busybox-84b6686758-l8w8h -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- exec busybox-84b6686758-k8l75 -- nslookup kubernetes.default
multinode_test.go:503: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- exec busybox-84b6686758-l8w8h -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- exec busybox-84b6686758-k8l75 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- exec busybox-84b6686758-l8w8h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.76s)

TestMultiNode/serial/PingHostFrom2Pods (1.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- exec busybox-84b6686758-k8l75 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- exec busybox-84b6686758-k8l75 -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:529: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- exec busybox-84b6686758-l8w8h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-20210813041051-2022292 -- exec busybox-84b6686758-l8w8h -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.08s)

TestMultiNode/serial/AddNode (42.23s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-20210813041051-2022292 -v 3 --alsologtostderr
E0813 04:13:24.491266 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 04:13:29.759854 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-20210813041051-2022292 -v 3 --alsologtostderr: (41.487416715s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.23s)

TestMultiNode/serial/ProfileList (0.3s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.30s)

TestMultiNode/serial/CopyFile (2.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 cp testdata/cp-test.txt multinode-20210813041051-2022292-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 ssh -n multinode-20210813041051-2022292-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 cp testdata/cp-test.txt multinode-20210813041051-2022292-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 ssh -n multinode-20210813041051-2022292-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.33s)

TestMultiNode/serial/StopNode (21.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 node stop m03
E0813 04:13:57.442964 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
multinode_test.go:191: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210813041051-2022292 node stop m03: (20.059914273s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status: exit status 7 (538.71631ms)
-- stdout --
	multinode-20210813041051-2022292
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813041051-2022292-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813041051-2022292-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status --alsologtostderr: exit status 7 (536.193138ms)
-- stdout --
	multinode-20210813041051-2022292
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210813041051-2022292-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210813041051-2022292-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0813 04:14:13.273678 2084021 out.go:298] Setting OutFile to fd 1 ...
	I0813 04:14:13.273796 2084021 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:14:13.273808 2084021 out.go:311] Setting ErrFile to fd 2...
	I0813 04:14:13.273812 2084021 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:14:13.273933 2084021 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 04:14:13.274120 2084021 out.go:305] Setting JSON to false
	I0813 04:14:13.274145 2084021 mustload.go:65] Loading cluster: multinode-20210813041051-2022292
	I0813 04:14:13.274468 2084021 status.go:253] checking status of multinode-20210813041051-2022292 ...
	I0813 04:14:13.274949 2084021 cli_runner.go:115] Run: docker container inspect multinode-20210813041051-2022292 --format={{.State.Status}}
	I0813 04:14:13.314081 2084021 status.go:328] multinode-20210813041051-2022292 host status = "Running" (err=<nil>)
	I0813 04:14:13.314107 2084021 host.go:66] Checking if "multinode-20210813041051-2022292" exists ...
	I0813 04:14:13.314403 2084021 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813041051-2022292
	I0813 04:14:13.346496 2084021 host.go:66] Checking if "multinode-20210813041051-2022292" exists ...
	I0813 04:14:13.346830 2084021 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 04:14:13.346878 2084021 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813041051-2022292
	I0813 04:14:13.377845 2084021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50838 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813041051-2022292/id_rsa Username:docker}
	I0813 04:14:13.480191 2084021 ssh_runner.go:149] Run: systemctl --version
	I0813 04:14:13.483910 2084021 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 04:14:13.495097 2084021 kubeconfig.go:93] found "multinode-20210813041051-2022292" server: "https://192.168.49.2:8443"
	I0813 04:14:13.495171 2084021 api_server.go:164] Checking apiserver status ...
	I0813 04:14:13.495211 2084021 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0813 04:14:13.507041 2084021 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1128/cgroup
	I0813 04:14:13.513582 2084021 api_server.go:180] apiserver freezer: "6:freezer:/docker/296ef1675cc7cf974fa363db8c6ec679a1880a4d5b5532c41a485257a2a5fa43/kubepods/burstable/pode2d147d6b3df6d3e5bce6b75773af958/b2b1763f2c79d961878bc31e4520511831e1937d1fe112f716d82e388458b09e"
	I0813 04:14:13.513653 2084021 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/296ef1675cc7cf974fa363db8c6ec679a1880a4d5b5532c41a485257a2a5fa43/kubepods/burstable/pode2d147d6b3df6d3e5bce6b75773af958/b2b1763f2c79d961878bc31e4520511831e1937d1fe112f716d82e388458b09e/freezer.state
	I0813 04:14:13.519513 2084021 api_server.go:202] freezer state: "THAWED"
	I0813 04:14:13.519537 2084021 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0813 04:14:13.528203 2084021 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0813 04:14:13.528224 2084021 status.go:419] multinode-20210813041051-2022292 apiserver status = Running (err=<nil>)
	I0813 04:14:13.528233 2084021 status.go:255] multinode-20210813041051-2022292 status: &{Name:multinode-20210813041051-2022292 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 04:14:13.528275 2084021 status.go:253] checking status of multinode-20210813041051-2022292-m02 ...
	I0813 04:14:13.528586 2084021 cli_runner.go:115] Run: docker container inspect multinode-20210813041051-2022292-m02 --format={{.State.Status}}
	I0813 04:14:13.560182 2084021 status.go:328] multinode-20210813041051-2022292-m02 host status = "Running" (err=<nil>)
	I0813 04:14:13.560212 2084021 host.go:66] Checking if "multinode-20210813041051-2022292-m02" exists ...
	I0813 04:14:13.560540 2084021 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210813041051-2022292-m02
	I0813 04:14:13.592368 2084021 host.go:66] Checking if "multinode-20210813041051-2022292-m02" exists ...
	I0813 04:14:13.592724 2084021 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0813 04:14:13.592775 2084021 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210813041051-2022292-m02
	I0813 04:14:13.625074 2084021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50843 SSHKeyPath:/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/machines/multinode-20210813041051-2022292-m02/id_rsa Username:docker}
	I0813 04:14:13.704385 2084021 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0813 04:14:13.712908 2084021 status.go:255] multinode-20210813041051-2022292-m02 status: &{Name:multinode-20210813041051-2022292-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0813 04:14:13.712936 2084021 status.go:253] checking status of multinode-20210813041051-2022292-m03 ...
	I0813 04:14:13.713237 2084021 cli_runner.go:115] Run: docker container inspect multinode-20210813041051-2022292-m03 --format={{.State.Status}}
	I0813 04:14:13.746299 2084021 status.go:328] multinode-20210813041051-2022292-m03 host status = "Stopped" (err=<nil>)
	I0813 04:14:13.746321 2084021 status.go:341] host is not running, skipping remaining checks
	I0813 04:14:13.746327 2084021 status.go:255] multinode-20210813041051-2022292-m03 status: &{Name:multinode-20210813041051-2022292-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (21.14s)

TestMultiNode/serial/StartAfterStop (30.24s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:225: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:235: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 node start m03 --alsologtostderr
multinode_test.go:235: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210813041051-2022292 node start m03 --alsologtostderr: (29.398484862s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.24s)

TestMultiNode/serial/RestartKeepsNodes (188.02s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210813041051-2022292
multinode_test.go:271: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-20210813041051-2022292
multinode_test.go:271: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-20210813041051-2022292: (59.946282449s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210813041051-2022292 --wait=true -v=8 --alsologtostderr
E0813 04:17:01.447866 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210813041051-2022292 --wait=true -v=8 --alsologtostderr: (2m7.932838702s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210813041051-2022292
--- PASS: TestMultiNode/serial/RestartKeepsNodes (188.02s)

TestMultiNode/serial/DeleteNode (24.13s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210813041051-2022292 node delete m03: (23.436775643s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status --alsologtostderr
multinode_test.go:395: (dbg) Run:  docker volume ls
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (24.13s)

TestMultiNode/serial/StopMultiNode (40.28s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 stop
E0813 04:18:29.758868 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 -p multinode-20210813041051-2022292 stop: (40.043120155s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status: exit status 7 (116.579682ms)
-- stdout --
	multinode-20210813041051-2022292
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813041051-2022292-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status --alsologtostderr: exit status 7 (121.495666ms)
-- stdout --
	multinode-20210813041051-2022292
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210813041051-2022292-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0813 04:18:56.355961 2093428 out.go:298] Setting OutFile to fd 1 ...
	I0813 04:18:56.356056 2093428 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:18:56.356068 2093428 out.go:311] Setting ErrFile to fd 2...
	I0813 04:18:56.356071 2093428 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:18:56.356207 2093428 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 04:18:56.356394 2093428 out.go:305] Setting JSON to false
	I0813 04:18:56.356422 2093428 mustload.go:65] Loading cluster: multinode-20210813041051-2022292
	I0813 04:18:56.356780 2093428 status.go:253] checking status of multinode-20210813041051-2022292 ...
	I0813 04:18:56.357256 2093428 cli_runner.go:115] Run: docker container inspect multinode-20210813041051-2022292 --format={{.State.Status}}
	I0813 04:18:56.388451 2093428 status.go:328] multinode-20210813041051-2022292 host status = "Stopped" (err=<nil>)
	I0813 04:18:56.388468 2093428 status.go:341] host is not running, skipping remaining checks
	I0813 04:18:56.388473 2093428 status.go:255] multinode-20210813041051-2022292 status: &{Name:multinode-20210813041051-2022292 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0813 04:18:56.388494 2093428 status.go:253] checking status of multinode-20210813041051-2022292-m02 ...
	I0813 04:18:56.388806 2093428 cli_runner.go:115] Run: docker container inspect multinode-20210813041051-2022292-m02 --format={{.State.Status}}
	I0813 04:18:56.418434 2093428 status.go:328] multinode-20210813041051-2022292-m02 host status = "Stopped" (err=<nil>)
	I0813 04:18:56.418466 2093428 status.go:341] host is not running, skipping remaining checks
	I0813 04:18:56.418472 2093428 status.go:255] multinode-20210813041051-2022292-m02 status: &{Name:multinode-20210813041051-2022292-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.28s)

TestMultiNode/serial/RestartMultiNode (90.49s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:325: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:335: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210813041051-2022292 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:335: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210813041051-2022292 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m29.759394318s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-arm64 -p multinode-20210813041051-2022292 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (90.49s)

TestMultiNode/serial/ValidateNameConflict (55.79s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-20210813041051-2022292
multinode_test.go:433: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210813041051-2022292-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-20210813041051-2022292-m02 --driver=docker  --container-runtime=containerd: exit status 14 (75.782523ms)
-- stdout --
	* [multinode-20210813041051-2022292-m02] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210813041051-2022292-m02' is duplicated with machine name 'multinode-20210813041051-2022292-m02' in profile 'multinode-20210813041051-2022292'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-20210813041051-2022292-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:441: (dbg) Done: out/minikube-linux-arm64 start -p multinode-20210813041051-2022292-m03 --driver=docker  --container-runtime=containerd: (52.225607367s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-20210813041051-2022292
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-20210813041051-2022292: exit status 80 (331.343487ms)
-- stdout --
	* Adding node m03 to cluster multinode-20210813041051-2022292
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210813041051-2022292-m03 already exists in multinode-20210813041051-2022292-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-20210813041051-2022292-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-20210813041051-2022292-m03: (3.082737156s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (55.79s)

TestDebPackageInstall/install_arm64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver (12.28s)

=== RUN   TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (12.280615561s)
--- PASS: TestDebPackageInstall/install_arm64_debian:sid/kvm2-driver (12.28s)

TestDebPackageInstall/install_arm64_debian:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver (10.67s)

=== RUN   TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (10.66628152s)
--- PASS: TestDebPackageInstall/install_arm64_debian:latest/kvm2-driver (10.67s)

TestDebPackageInstall/install_arm64_debian:10/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_arm64_debian:10/kvm2-driver (10.16s)

=== RUN   TestDebPackageInstall/install_arm64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (10.161924258s)
--- PASS: TestDebPackageInstall/install_arm64_debian:10/kvm2-driver (10.16s)

TestDebPackageInstall/install_arm64_debian:9/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_arm64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_arm64_debian:9/kvm2-driver (9s)

=== RUN   TestDebPackageInstall/install_arm64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
E0813 04:22:01.447776 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (8.999117205s)
--- PASS: TestDebPackageInstall/install_arm64_debian:9/kvm2-driver (9.00s)

TestDebPackageInstall/install_arm64_ubuntu:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver (13.57s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (13.565331264s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:latest/kvm2-driver (13.57s)

TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver (12.86s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (12.862960819s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.10/kvm2-driver (12.86s)

TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver (13.27s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (13.272314174s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:20.04/kvm2-driver (13.27s)

TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver (11.56s)

=== RUN   TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_arm64/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_arm64.deb": (11.563788035s)
--- PASS: TestDebPackageInstall/install_arm64_ubuntu:18.04/kvm2-driver (11.56s)

TestInsufficientStorage (22.37s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-20210813042447-2022292 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E0813 04:24:52.803936 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-20210813042447-2022292 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (15.679293698s)

-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210813042447-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"56528690-8e18-4edc-9bfe-85789bc0dbac","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"8ee7f932-b87d-4674-88d3-2a5873377a4f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig"},"datacontenttype":"application/json","id":"2e486cd1-7dc6-409b-8208-c0a645ad4b7f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube"},"datacontenttype":"application/json","id":"99ea2244-ef18-4537-a986-e54a9d2100a3","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"},"datacontenttype":"application/json","id":"edb4cab7-4caf-4b86-aa03-d743ed385f03","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"fa3f5873-5a98-4000-a48e-d3e142ab65d8","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"43839ccd-ba75-4d37-b2fe-fc24b0f6f958","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"2ef8395f-f21d-4d0e-a052-e955dacd24f5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"36a68f3b-3a0c-4fd8-87cc-88a0b6b9b9b7","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210813042447-2022292 in cluster insufficient-storage-20210813042447-2022292","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"8cc8b4ce-4d5c-4870-905d-b8427225646a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"b3b1bd17-17da-435c-8bb3-cf04006f6a97","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"b082abbb-b1f8-4849-9725-27a3581f6d06","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"20f44b37-da79-48ca-88dd-2d71c57d374d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-20210813042447-2022292 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-20210813042447-2022292 --output=json --layout=cluster: exit status 7 (271.529245ms)

-- stdout --
	{"Name":"insufficient-storage-20210813042447-2022292","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210813042447-2022292","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0813 04:25:03.436596 2127824 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210813042447-2022292" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-20210813042447-2022292 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-20210813042447-2022292 --output=json --layout=cluster: exit status 7 (267.783491ms)

-- stdout --
	{"Name":"insufficient-storage-20210813042447-2022292","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210813042447-2022292","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0813 04:25:03.704928 2127859 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210813042447-2022292" does not appear in /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	E0813 04:25:03.713636 2127859 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/insufficient-storage-20210813042447-2022292/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20210813042447-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-20210813042447-2022292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-20210813042447-2022292: (6.155251367s)
--- PASS: TestInsufficientStorage (22.37s)
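The `start --output=json` run above streams one CloudEvents-style JSON object per line, and the test outcome hinges on the `io.k8s.sigs.minikube.error` event carrying `RSRC_DOCKER_STORAGE` / exit code 26. A minimal sketch of scanning such a stream (the `find_error_event` helper is illustrative, not part of the test suite, and the sample events are trimmed from the output above):

```python
import json

def find_error_event(stream):
    """Return the data payload of the first minikube error event, or None.

    Each non-empty line of `stream` is one CloudEvents-style JSON object,
    as in the --output=json run captured above.
    """
    for line in stream.splitlines():
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            return event["data"]  # carries name, exitcode, advice, ...
    return None

# Trimmed sample events, adapted from the TestInsufficientStorage output above.
sample = "\n".join([
    '{"data":{"message":"MINIKUBE_LOCATION=12230"},"type":"io.k8s.sigs.minikube.info"}',
    '{"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"},"type":"io.k8s.sigs.minikube.error"}',
])
err = find_error_event(sample)
print(err["name"], err["exitcode"])  # RSRC_DOCKER_STORAGE 26
```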

TestKubernetesUpgrade (257.2s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210813042631-2022292 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210813042631-2022292 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m17.412770775s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-20210813042631-2022292
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-20210813042631-2022292: (20.240510056s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-20210813042631-2022292 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-20210813042631-2022292 status --format={{.Host}}: exit status 7 (88.755587ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210813042631-2022292 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210813042631-2022292 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m20.806544451s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210813042631-2022292 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210813042631-2022292 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210813042631-2022292 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd: exit status 106 (84.432822ms)

-- stdout --
	* [kubernetes-upgrade-20210813042631-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210813042631-2022292
	    minikube start -p kubernetes-upgrade-20210813042631-2022292 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813042631-20222922 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210813042631-2022292 --kubernetes-version=v1.22.0-rc.0
	    

** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-20210813042631-2022292 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-20210813042631-2022292 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m15.346396055s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210813042631-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-20210813042631-2022292
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-20210813042631-2022292: (3.108147724s)
--- PASS: TestKubernetesUpgrade (257.20s)

TestPause/serial/Start (100.3s)

=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-arm64 start -p pause-20210813042509-2022292 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-arm64 start -p pause-20210813042509-2022292 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m40.297222976s)
--- PASS: TestPause/serial/Start (100.30s)

TestPause/serial/SecondStartNoReconfiguration (6.46s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-arm64 start -p pause-20210813042509-2022292 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:89: (dbg) Done: out/minikube-linux-arm64 start -p pause-20210813042509-2022292 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.445141422s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.46s)

TestPause/serial/Pause (0.65s)

=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-20210813042509-2022292 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-20210813042509-2022292 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-20210813042509-2022292 --output=json --layout=cluster: exit status 2 (289.188851ms)

-- stdout --
	{"Name":"pause-20210813042509-2022292","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210813042509-2022292","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
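The `--layout=cluster` status document above nests per-component states under each node, which is how the report can show the cluster Paused (418) while the kubelet reads Stopped (405). A small sketch of walking that structure (the `summarize` helper is illustrative; the field names and values are taken from the stdout captured above):

```python
import json

# Status JSON trimmed from the TestPause/serial/VerifyStatus output above.
raw = '''{"Name":"pause-20210813042509-2022292","StatusCode":418,"StatusName":"Paused",
"Nodes":[{"Name":"pause-20210813042509-2022292","StatusCode":200,"StatusName":"OK",
"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

def summarize(status):
    """Flatten a --layout=cluster status document into 'name=state' pairs."""
    parts = ['cluster={}'.format(status["StatusName"])]
    for node in status.get("Nodes", []):
        for comp in node.get("Components", {}).values():
            parts.append('{}={}'.format(comp["Name"], comp["StatusName"]))
    return " ".join(parts)

print(summarize(json.loads(raw)))  # cluster=Paused apiserver=Paused kubelet=Stopped
```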

TestPause/serial/Unpause (0.57s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-20210813042509-2022292 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

TestPause/serial/PauseAgain (8.74s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-20210813042509-2022292 --alsologtostderr -v=5
E0813 04:27:01.446945 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
pause_test.go:107: (dbg) Done: out/minikube-linux-arm64 pause -p pause-20210813042509-2022292 --alsologtostderr -v=5: (8.743614946s)
--- PASS: TestPause/serial/PauseAgain (8.74s)

TestPause/serial/DeletePaused (3.48s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-20210813042509-2022292 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-arm64 delete -p pause-20210813042509-2022292 --alsologtostderr -v=5: (3.478262385s)
--- PASS: TestPause/serial/DeletePaused (3.48s)

TestPause/serial/VerifyDeletedResources (0.47s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210813042509-2022292
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210813042509-2022292: exit status 1 (73.294353ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210813042509-2022292

** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (0.47s)
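Editor's note: the deletion check above treats `docker volume inspect` exiting 1 while printing an empty JSON array as proof the volume is gone. A sketch of that interpretation (hypothetical helper; the exit code and output are the ones printed in this section):

```python
import json

def volume_absent(exit_code: int, stdout: str) -> bool:
    """`docker volume inspect NAME` exits 1 and prints an empty JSON
    array (`[]`) when the named volume no longer exists."""
    return exit_code == 1 and json.loads(stdout) == []

# Values from the VerifyDeletedResources output above.
assert volume_absent(1, "[]")
```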

TestNetworkPlugins/group/false (0.86s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-arm64 start -p false-20210813042827-2022292 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-20210813042827-2022292 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (396.904652ms)

-- stdout --
	* [false-20210813042827-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=12230
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0813 04:28:27.914200 2139542 out.go:298] Setting OutFile to fd 1 ...
	I0813 04:28:27.914376 2139542 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:28:27.914385 2139542 out.go:311] Setting ErrFile to fd 2...
	I0813 04:28:27.914389 2139542 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0813 04:28:27.914524 2139542 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/bin
	I0813 04:28:27.914789 2139542 out.go:305] Setting JSON to false
	I0813 04:28:27.916415 2139542 start.go:111] hostinfo: {"hostname":"ip-172-31-30-239","uptime":51052,"bootTime":1628777856,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0813 04:28:27.916517 2139542 start.go:121] virtualization:  
	I0813 04:28:27.920061 2139542 out.go:177] * [false-20210813042827-2022292] minikube v1.22.0 on Ubuntu 20.04 (arm64)
	I0813 04:28:27.923500 2139542 out.go:177]   - MINIKUBE_LOCATION=12230
	I0813 04:28:27.921261 2139542 notify.go:169] Checking for updates...
	I0813 04:28:27.925760 2139542 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/kubeconfig
	I0813 04:28:27.928128 2139542 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube
	I0813 04:28:27.931141 2139542 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0813 04:28:27.931677 2139542 driver.go:335] Setting default libvirt URI to qemu:///system
	I0813 04:28:28.030206 2139542 docker.go:132] docker version: linux-20.10.8
	I0813 04:28:28.033523 2139542 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0813 04:28:28.182570 2139542 info.go:263] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:19 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2021-08-13 04:28:28.086097994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.8.0-1041-aws OperatingSystem:Ubuntu 20.04.2 LTS OSType:linux Architecture:aarch64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8226263040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientIn
fo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker]] Warnings:<nil>}}
	I0813 04:28:28.182679 2139542 docker.go:244] overlay module found
	I0813 04:28:28.185282 2139542 out.go:177] * Using the docker driver based on user configuration
	I0813 04:28:28.185307 2139542 start.go:278] selected driver: docker
	I0813 04:28:28.185313 2139542 start.go:751] validating driver "docker" against <nil>
	I0813 04:28:28.185330 2139542 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0813 04:28:28.185518 2139542 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0813 04:28:28.185569 2139542 out.go:242] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0813 04:28:28.188045 2139542 out.go:177]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0813 04:28:28.190825 2139542 out.go:177] 
	W0813 04:28:28.191021 2139542 out.go:242] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0813 04:28:28.193107 2139542 out.go:177] 

** /stderr **
helpers_test.go:176: Cleaning up "false-20210813042827-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p false-20210813042827-2022292
--- PASS: TestNetworkPlugins/group/false (0.86s)
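Editor's note: this "false" case passes precisely because minikube refuses the flag combination up front — with `--container-runtime=containerd`, `--cni=false` triggers `MK_USAGE` (exit status 14) before any cluster is created. A hypothetical sketch of that rule, using only the codes and messages that appear in this log:

```python
MK_USAGE = 14  # exit status observed above for the rejected flag combination

def expected_exit(container_runtime: str, cni: str) -> int:
    """Sketch of the validation: the containerd runtime requires a CNI,
    so --cni=false is rejected with MK_USAGE before the cluster starts."""
    if container_runtime == "containerd" and cni == "false":
        return MK_USAGE  # 'X Exiting due to MK_USAGE: ... requires CNI'
    return 0  # assumed success path; not exercised in this section

assert expected_exit("containerd", "false") == MK_USAGE
```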

TestStartStop/group/old-k8s-version/serial/FirstStart (144.45s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-20210813043048-2022292 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-20210813043048-2022292 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (2m24.451941532s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.45s)

TestStartStop/group/no-preload/serial/FirstStart (94.3s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-20210813043133-2022292 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0
E0813 04:32:01.447498 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-20210813043133-2022292 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (1m34.30202291s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (94.30s)

TestStartStop/group/no-preload/serial/DeployApp (8.71s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813043133-2022292 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [183f28ae-b76e-49ba-b491-c1b049f2ebeb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [183f28ae-b76e-49ba-b491-c1b049f2ebeb] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.038659698s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210813043133-2022292 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.71s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813043048-2022292 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [9289d951-fbef-11eb-a893-0242577d9e8b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [9289d951-fbef-11eb-a893-0242577d9e8b] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.018717449s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210813043048-2022292 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.69s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-20210813043133-2022292 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210813043133-2022292 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/no-preload/serial/Stop (20.23s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-20210813043133-2022292 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-20210813043133-2022292 --alsologtostderr -v=3: (20.226235488s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.23s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-20210813043048-2022292 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210813043048-2022292 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/old-k8s-version/serial/Stop (20.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-20210813043048-2022292 --alsologtostderr -v=3
E0813 04:33:29.758875 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-20210813043048-2022292 --alsologtostderr -v=3: (20.237004475s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.24s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210813043133-2022292 -n no-preload-20210813043133-2022292
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210813043133-2022292 -n no-preload-20210813043133-2022292: exit status 7 (89.986177ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-20210813043133-2022292 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (368.45s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-20210813043133-2022292 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-20210813043133-2022292 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (6m8.109559089s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-20210813043133-2022292 -n no-preload-20210813043133-2022292
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (368.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292: exit status 7 (111.109874ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-20210813043048-2022292 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (372.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-20210813043048-2022292 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0
E0813 04:37:01.449362 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 04:38:29.757833 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-20210813043048-2022292 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (6m11.87264987s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-20210813043048-2022292 -n old-k8s-version-20210813043048-2022292
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (372.18s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-rg8kl" [63a08db0-96cb-4e05-a727-7de844be02d3] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019436652s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-rg8kl" [63a08db0-96cb-4e05-a727-7de844be02d3] Running

=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005372256s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210813043133-2022292 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-xmnmv" [dd1bcb0c-fbef-11eb-96a3-0242c0a83a02] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018266985s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-20210813043133-2022292 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (2.54s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-20210813043133-2022292 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210813043133-2022292 -n no-preload-20210813043133-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210813043133-2022292 -n no-preload-20210813043133-2022292: exit status 2 (297.04652ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-20210813043133-2022292 -n no-preload-20210813043133-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-20210813043133-2022292 -n no-preload-20210813043133-2022292: exit status 2 (295.636219ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-20210813043133-2022292 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-20210813043133-2022292 -n no-preload-20210813043133-2022292
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-20210813043133-2022292 -n no-preload-20210813043133-2022292
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.54s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-xmnmv" [dd1bcb0c-fbef-11eb-96a3-0242c0a83a02] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005229415s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210813043048-2022292 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.24s)

TestStartStop/group/embed-certs/serial/FirstStart (128.89s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-20210813044003-2022292 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-20210813044003-2022292 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (2m8.889867972s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (128.89s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-20210813043048-2022292 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.55s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (126.63s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-different-port-20210813044024-2022292 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3
E0813 04:41:32.804089 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:42:01.446914 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-different-port-20210813044024-2022292 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (2m6.627163914s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (126.63s)

TestStartStop/group/embed-certs/serial/DeployApp (8.52s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813044003-2022292 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [592043eb-6530-4bbf-a64a-47a734745ae3] Pending
helpers_test.go:343: "busybox" [592043eb-6530-4bbf-a64a-47a734745ae3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [592043eb-6530-4bbf-a64a-47a734745ae3] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.02722767s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210813044003-2022292 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.52s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-20210813044003-2022292 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210813044003-2022292 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/embed-certs/serial/Stop (20.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-20210813044003-2022292 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-20210813044003-2022292 --alsologtostderr -v=3: (20.387479979s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.39s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.76s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813044024-2022292 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [36e2b048-7558-4809-b666-b67d888a6e1a] Pending
helpers_test.go:343: "busybox" [36e2b048-7558-4809-b666-b67d888a6e1a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [36e2b048-7558-4809-b666-b67d888a6e1a] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 7.035235346s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210813044024-2022292 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (7.76s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-different-port-20210813044024-2022292 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210813044024-2022292 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/default-k8s-different-port/serial/Stop (20.21s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-different-port-20210813044024-2022292 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-different-port-20210813044024-2022292 --alsologtostderr -v=3: (20.21419244s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (20.21s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210813044003-2022292 -n embed-certs-20210813044003-2022292
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210813044003-2022292 -n embed-certs-20210813044003-2022292: exit status 7 (95.319388ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-20210813044003-2022292 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (378.2s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-20210813044003-2022292 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-20210813044003-2022292 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (6m17.811485723s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-20210813044003-2022292 -n embed-certs-20210813044003-2022292
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (378.20s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210813044024-2022292 -n default-k8s-different-port-20210813044024-2022292
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210813044024-2022292 -n default-k8s-different-port-20210813044024-2022292: exit status 7 (90.72791ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-different-port-20210813044024-2022292 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (347.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-different-port-20210813044024-2022292 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3
E0813 04:43:08.706973 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:08.712232 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:08.722442 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:08.743204 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:08.783386 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:08.863617 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:09.023938 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:09.344984 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:09.985602 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:11.266558 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:13.827241 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:13.936521 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:13.942411 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:13.952630 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:13.972813 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:14.013064 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:14.093320 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:14.253535 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:14.574009 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:15.214907 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:16.495076 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:18.947748 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:19.055828 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:24.176142 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:29.188223 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:29.758827 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:43:34.416346 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:43:49.668422 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:43:54.896513 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:44:30.629325 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:44:35.857609 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:45:52.550126 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:45:57.777971 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:46:44.506593 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 04:47:01.447233 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 04:48:08.712389 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:48:13.935967 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 04:48:29.758325 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory
E0813 04:48:36.390363 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 04:48:41.618671 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-different-port-20210813044024-2022292 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.21.3: (5m46.661036851s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-different-port-20210813044024-2022292 -n default-k8s-different-port-20210813044024-2022292
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (347.02s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-zlzld" [cec6d666-354c-4e26-b3ac-d7d361e2520a] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.064473908s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-zlzld" [cec6d666-354c-4e26-b3ac-d7d361e2520a] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010794473s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210813044024-2022292 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-different-port-20210813044024-2022292 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/default-k8s-different-port/serial/Pause (2.93s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-different-port-20210813044024-2022292 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-different-port-20210813044024-2022292 -n default-k8s-different-port-20210813044024-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-different-port-20210813044024-2022292 -n default-k8s-different-port-20210813044024-2022292: exit status 2 (308.122133ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-different-port-20210813044024-2022292 -n default-k8s-different-port-20210813044024-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-different-port-20210813044024-2022292 -n default-k8s-different-port-20210813044024-2022292: exit status 2 (311.816748ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-different-port-20210813044024-2022292 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-different-port-20210813044024-2022292 -n default-k8s-different-port-20210813044024-2022292

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-different-port-20210813044024-2022292 -n default-k8s-different-port-20210813044024-2022292
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.93s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-2nb5m" [ed9286fb-7aff-4bf6-9bc3-8c8b6954e3d5] Running

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.032379608s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/newest-cni/serial/FirstStart (74.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-20210813044903-2022292 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-20210813044903-2022292 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (1m14.711458784s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (74.71s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (8.61s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-2nb5m" [ed9286fb-7aff-4bf6-9bc3-8c8b6954e3d5] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006963729s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210813044003-2022292 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:264: (dbg) Done: kubectl --context embed-certs-20210813044003-2022292 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (3.600574144s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (8.61s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.68s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-20210813044003-2022292 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.68s)

TestStartStop/group/embed-certs/serial/Pause (2.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-20210813044003-2022292 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210813044003-2022292 -n embed-certs-20210813044003-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210813044003-2022292 -n embed-certs-20210813044003-2022292: exit status 2 (294.022541ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-20210813044003-2022292 -n embed-certs-20210813044003-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-20210813044003-2022292 -n embed-certs-20210813044003-2022292: exit status 2 (289.946407ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-20210813044003-2022292 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-20210813044003-2022292 -n embed-certs-20210813044003-2022292
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-20210813044003-2022292 -n embed-certs-20210813044003-2022292
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.92s)

TestNetworkPlugins/group/auto/Start (140.23s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p auto-20210813042827-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p auto-20210813042827-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (2m20.228435908s)
--- PASS: TestNetworkPlugins/group/auto/Start (140.23s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.52s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-20210813044903-2022292 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-20210813044903-2022292 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.520186716s)
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.52s)

TestStartStop/group/newest-cni/serial/Stop (20.62s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-20210813044903-2022292 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-20210813044903-2022292 --alsologtostderr -v=3: (20.622439587s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.62s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210813044903-2022292 -n newest-cni-20210813044903-2022292
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210813044903-2022292 -n newest-cni-20210813044903-2022292: exit status 7 (95.874752ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-20210813044903-2022292 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (37.85s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-20210813044903-2022292 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-20210813044903-2022292 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (37.500572619s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-20210813044903-2022292 -n newest-cni-20210813044903-2022292
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.85s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-20210813044903-2022292 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/newest-cni/serial/Pause (2.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-20210813044903-2022292 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-20210813044903-2022292 -n newest-cni-20210813044903-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-20210813044903-2022292 -n newest-cni-20210813044903-2022292: exit status 2 (291.573378ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-20210813044903-2022292 -n newest-cni-20210813044903-2022292
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-20210813044903-2022292 -n newest-cni-20210813044903-2022292: exit status 2 (296.750418ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-20210813044903-2022292 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-20210813044903-2022292 -n newest-cni-20210813044903-2022292
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-20210813044903-2022292 -n newest-cni-20210813044903-2022292
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.46s)

TestNetworkPlugins/group/custom-weave/Start (92.77s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p custom-weave-20210813042828-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p custom-weave-20210813042828-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd: (1m32.769510795s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (92.77s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-20210813042827-2022292 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (8.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210813042827-2022292 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-7xpr5" [49fc19dc-15e3-443d-a286-9859bcb5903f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-7xpr5" [49fc19dc-15e3-443d-a286-9859bcb5903f] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.006675547s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.38s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210813042827-2022292 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210813042827-2022292 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210813042827-2022292 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-weave-20210813042828-2022292 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-weave/NetCatPod (9.63s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210813042828-2022292 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-wzvsw" [9dc9bf46-f6c3-4852-b24d-746ba8bfa715] Pending
helpers_test.go:343: "netcat-66fbc655d5-wzvsw" [9dc9bf46-f6c3-4852-b24d-746ba8bfa715] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-wzvsw" [9dc9bf46-f6c3-4852-b24d-746ba8bfa715] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 9.020205577s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (9.63s)

TestNetworkPlugins/group/enable-default-cni/Start (144.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-20210813042827-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0813 05:01:43.753753 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory
E0813 05:02:01.447486 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 05:02:11.435402 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/auto-20210813042827-2022292/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-20210813042827-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (2m24.657474825s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (144.66s)

TestNetworkPlugins/group/kindnet/Start (124.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-20210813042827-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd
E0813 05:02:31.431297 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/default-k8s-different-port-20210813044024-2022292/client.crt: no such file or directory
E0813 05:02:58.391377 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 05:03:08.707421 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/no-preload-20210813043133-2022292/client.crt: no such file or directory
E0813 05:03:13.936781 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/old-k8s-version-20210813043048-2022292/client.crt: no such file or directory
E0813 05:03:24.507175 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/addons-20210813032940-2022292/client.crt: no such file or directory
E0813 05:03:26.073077 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/custom-weave-20210813042828-2022292/client.crt: no such file or directory
E0813 05:03:29.758866 2022292 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-arm64-docker-containerd-12230-2020539-1c76ff5cea01605c2d985c010644edf1e689d34b/.minikube/profiles/functional-20210813035500-2022292/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-20210813042827-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (2m4.673621153s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (124.67s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-20210813042827-2022292 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.54s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210813042827-2022292 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-bt7wc" [3a6b9e10-cf55-4085-abeb-ade7e4d5a9fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-bt7wc" [3a6b9e10-cf55-4085-abeb-ade7e4d5a9fe] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00839807s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.54s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210813042827-2022292 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210813042827-2022292 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210813042827-2022292 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (106.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-20210813042827-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p bridge-20210813042827-2022292 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (1m46.669379833s)
--- PASS: TestNetworkPlugins/group/bridge/Start (106.67s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-wkbc9" [e24fa322-2e77-495b-a4b3-f8700bc814fe] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.028943315s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-20210813042827-2022292 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.57s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210813042827-2022292 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-56zhd" [c6e2ce68-eda3-432c-b342-fac9c773a70e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-56zhd" [c6e2ce68-eda3-432c-b342-fac9c773a70e] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005138741s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.57s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210813042827-2022292 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.3s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210813042827-2022292 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.30s)

TestNetworkPlugins/group/kindnet/HairPin (0.27s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210813042827-2022292 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-20210813042827-2022292 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (9.38s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210813042827-2022292 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-qfk29" [4496cbbf-4d51-4c59-8e24-a43ef1e38bbe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-qfk29" [4496cbbf-4d51-4c59-8e24-a43ef1e38bbe] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004913674s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.38s)

TestNetworkPlugins/group/bridge/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210813042827-2022292 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210813042827-2022292 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210813042827-2022292 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (30/252)

TestDownloadOnly/v1.14.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.21.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (13.85s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-20210813032926-2022292 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-arm64 start --download-only -p download-docker-20210813032926-2022292 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (13.154218207s)
aaa_download_only_test.go:238: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-20210813032926-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-20210813032926-2022292
--- SKIP: TestDownloadOnlyKic (13.85s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:398: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:46: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1541: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)
=== RUN   TestPreload
preload_test.go:36: skipping TestPreload - not yet supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.26s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210813044024-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-20210813044024-2022292
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)

TestNetworkPlugins/group/kubenet (0.33s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210813042827-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-20210813042827-2022292
--- SKIP: TestNetworkPlugins/group/kubenet (0.33s)

TestNetworkPlugins/group/flannel (0.37s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20210813042827-2022292" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p flannel-20210813042827-2022292
--- SKIP: TestNetworkPlugins/group/flannel (0.37s)