Test Report: Docker_Linux_crio_arm64 19711

f2dddbc2cec1d99a0bb3d71de73f46a47f499a62:2024-09-27:36389

Test failures (4/327)

| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                            | 73.78        |
| 34    | TestAddons/parallel/Ingress                             | 152.19       |
| 36    | TestAddons/parallel/MetricsServer                       | 357.8        |
| 301   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 383.01       |
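
For reference, a minimal sketch of how one might re-run just the Registry failure from a minikube checkout follows. It uses only standard `go test` flags; the driver, container-runtime, and binary selection are handled by the integration harness and are assumed to be supplied separately (they are not shown in this report).

  # Hedged sketch: re-run only the failing Registry subtest from the repo root.
  # -run takes one regex per slash-separated subtest level; -v and -timeout are
  # standard `go test` flags. Harness-specific flags (driver/runtime/binary) are
  # assumptions left to the reader's environment.
  go test ./test/integration -v -timeout 30m \
    -run "TestAddons/parallel/Registry"
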
TestAddons/parallel/Registry (73.78s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.836291ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-7997r" [06852bd1-3230-4615-b6a1-8834e426e02d] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004280025s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ld2gg" [44a3013c-bbfc-4d08-9ed4-a5160422cdf0] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003674542s
addons_test.go:338: (dbg) Run:  kubectl --context addons-220192 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-220192 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-220192 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.109091552s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-220192 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 ip
2024/09/27 00:46:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 addons disable registry --alsologtostderr -v=1
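
The failing step above is the in-cluster connectivity check: a throwaway busybox pod runs `wget --spider` against the registry Service's cluster DNS name and times out after one minute. A minimal sketch for reproducing that check by hand against the same profile follows; the pod name `registry-smoke`, the extra `nslookup` step, and the Service name `registry` (inferred from the DNS name in the log) are illustrative assumptions, not part of the test.

  # Hedged reproduction sketch (assumes the addons-220192 profile is still running).
  # Check that the Service name resolves, then repeat the probe the test uses.
  kubectl --context addons-220192 run registry-smoke --rm -it --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- \
    sh -c "nslookup registry.kube-system.svc.cluster.local && wget --spider -S http://registry.kube-system.svc.cluster.local"

  # If DNS resolves but the probe still hangs, inspect the Service and its endpoints
  # (Service name "registry" is inferred from the DNS name above).
  kubectl --context addons-220192 -n kube-system get svc registry -o wide
  kubectl --context addons-220192 -n kube-system get endpoints registry
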
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-220192
helpers_test.go:235: (dbg) docker inspect addons-220192:

-- stdout --
	[
	    {
	        "Id": "d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b",
	        "Created": "2024-09-27T00:34:02.077711994Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 560408,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-27T00:34:02.205411751Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/hosts",
	        "LogPath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b-json.log",
	        "Name": "/addons-220192",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-220192:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-220192",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a-init/diff:/var/lib/docker/overlay2/e55adca0cb8a4469e5ee8e2f29139ff0ae0fed3b714ff629d2562144f224236f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-220192",
	                "Source": "/var/lib/docker/volumes/addons-220192/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-220192",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-220192",
	                "name.minikube.sigs.k8s.io": "addons-220192",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb69f56da587fa8de40f3ac5f3f88f4566733f9673b58beb1d3e2d5b04e449e4",
	            "SandboxKey": "/var/run/docker/netns/eb69f56da587",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-220192": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "17b152e28b32de3f994213bf60b3fa21cfee26682153643fc3b71f12f405c393",
	                    "EndpointID": "8d6fe335b06a81d7595798770e72c7f67d0e3bb540d515a162969aad9ac12807",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-220192",
	                        "d422e214370b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
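
The inspect output above also shows how the registry port is reachable from the host: container port 5000/tcp is published on 127.0.0.1:33503, and the test itself probed the node address 192.168.49.2:5000 at 00:46:27. A hedged sketch for probing both paths from the Jenkins host follows; it assumes the registry addon serves the standard Docker Registry HTTP API at `/v2/`, which is an assumption rather than something shown in this log.

  # Hedged sketch: probe the registry from the host via both published paths.
  # A healthy registry normally answers /v2/ with HTTP 200 (assumption, not from the log).
  curl -sv http://127.0.0.1:33503/v2/    # host port published by the kic container
  curl -sv http://192.168.49.2:5000/v2/  # node address probed by the test at 00:46:27
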
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-220192 -n addons-220192
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-220192 logs -n 25: (1.633089921s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-005398   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | -p download-only-005398              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| delete  | -p download-only-005398              | download-only-005398   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| start   | -o=json --download-only              | download-only-763965   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | -p download-only-763965              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| delete  | -p download-only-763965              | download-only-763965   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| delete  | -p download-only-005398              | download-only-005398   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| delete  | -p download-only-763965              | download-only-763965   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| start   | --download-only -p                   | download-docker-575684 | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | download-docker-575684               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-575684            | download-docker-575684 | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| start   | --download-only -p                   | binary-mirror-878606   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | binary-mirror-878606                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39419               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-878606              | binary-mirror-878606   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| addons  | disable dashboard -p                 | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | addons-220192                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | addons-220192                        |                        |         |         |                     |                     |
	| start   | -p addons-220192 --wait=true         | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:37 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:45 UTC | 27 Sep 24 00:45 UTC |
	|         | -p addons-220192                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-220192 addons disable         | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:45 UTC | 27 Sep 24 00:45 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-220192 addons                 | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-220192 addons                 | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-220192 ip                     | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	| addons  | addons-220192 addons disable         | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:33:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:33:38.065367  559927 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:33:38.065662  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:38.065684  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:33:38.065691  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:38.066134  559927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 00:33:38.067015  559927 out.go:352] Setting JSON to false
	I0927 00:33:38.067932  559927 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15361,"bootTime":1727381857,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0927 00:33:38.068011  559927 start.go:139] virtualization:  
	I0927 00:33:38.070248  559927 out.go:177] * [addons-220192] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:33:38.071946  559927 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:33:38.071998  559927 notify.go:220] Checking for updates...
	I0927 00:33:38.075858  559927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:33:38.077758  559927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 00:33:38.079450  559927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	I0927 00:33:38.081273  559927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:33:38.082746  559927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:33:38.084258  559927 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:33:38.110806  559927 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:33:38.110932  559927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:38.175583  559927 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:33:38.165974566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:38.175704  559927 docker.go:318] overlay module found
	I0927 00:33:38.178529  559927 out.go:177] * Using the docker driver based on user configuration
	I0927 00:33:38.179548  559927 start.go:297] selected driver: docker
	I0927 00:33:38.179564  559927 start.go:901] validating driver "docker" against <nil>
	I0927 00:33:38.179577  559927 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:33:38.180219  559927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:38.238992  559927 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:33:38.229229626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:38.239202  559927 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:33:38.239427  559927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:33:38.240920  559927 out.go:177] * Using Docker driver with root privileges
	I0927 00:33:38.242287  559927 cni.go:84] Creating CNI manager for ""
	I0927 00:33:38.242357  559927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:33:38.242365  559927 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:33:38.242444  559927 start.go:340] cluster config:
	{Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:33:38.244624  559927 out.go:177] * Starting "addons-220192" primary control-plane node in "addons-220192" cluster
	I0927 00:33:38.245946  559927 cache.go:121] Beginning downloading kic base image for docker with crio
	I0927 00:33:38.247419  559927 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:33:38.248793  559927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:33:38.248850  559927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0927 00:33:38.248878  559927 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:33:38.248883  559927 cache.go:56] Caching tarball of preloaded images
	I0927 00:33:38.248983  559927 preload.go:172] Found /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0927 00:33:38.248995  559927 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:33:38.249334  559927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/config.json ...
	I0927 00:33:38.249364  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/config.json: {Name:mkb4ce982f7db05f161e177b73decd3cb5d108a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:33:38.262886  559927 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:33:38.263010  559927 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:33:38.263042  559927 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 00:33:38.263053  559927 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 00:33:38.263061  559927 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 00:33:38.263070  559927 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0927 00:33:55.153743  559927 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0927 00:33:55.153786  559927 cache.go:194] Successfully downloaded all kic artifacts
	I0927 00:33:55.153817  559927 start.go:360] acquireMachinesLock for addons-220192: {Name:mk630666e0be44a920ddd2e3008b4121da78b597 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:33:55.153958  559927 start.go:364] duration metric: took 117.166µs to acquireMachinesLock for "addons-220192"
	I0927 00:33:55.153999  559927 start.go:93] Provisioning new machine with config: &{Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:33:55.154087  559927 start.go:125] createHost starting for "" (driver="docker")
	I0927 00:33:55.156404  559927 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0927 00:33:55.156691  559927 start.go:159] libmachine.API.Create for "addons-220192" (driver="docker")
	I0927 00:33:55.156728  559927 client.go:168] LocalClient.Create starting
	I0927 00:33:55.156866  559927 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem
	I0927 00:33:55.366096  559927 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem
	I0927 00:33:55.869561  559927 cli_runner.go:164] Run: docker network inspect addons-220192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0927 00:33:55.885619  559927 cli_runner.go:211] docker network inspect addons-220192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0927 00:33:55.885722  559927 network_create.go:284] running [docker network inspect addons-220192] to gather additional debugging logs...
	I0927 00:33:55.885746  559927 cli_runner.go:164] Run: docker network inspect addons-220192
	W0927 00:33:55.900334  559927 cli_runner.go:211] docker network inspect addons-220192 returned with exit code 1
	I0927 00:33:55.900373  559927 network_create.go:287] error running [docker network inspect addons-220192]: docker network inspect addons-220192: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-220192 not found
	I0927 00:33:55.900388  559927 network_create.go:289] output of [docker network inspect addons-220192]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-220192 not found
	
	** /stderr **
	I0927 00:33:55.900485  559927 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:33:55.915597  559927 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bf5250}
	I0927 00:33:55.915643  559927 network_create.go:124] attempt to create docker network addons-220192 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0927 00:33:55.915701  559927 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-220192 addons-220192
	I0927 00:33:55.980148  559927 network_create.go:108] docker network addons-220192 192.168.49.0/24 created
	I0927 00:33:55.980183  559927 kic.go:121] calculated static IP "192.168.49.2" for the "addons-220192" container
	I0927 00:33:55.980255  559927 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0927 00:33:55.992949  559927 cli_runner.go:164] Run: docker volume create addons-220192 --label name.minikube.sigs.k8s.io=addons-220192 --label created_by.minikube.sigs.k8s.io=true
	I0927 00:33:56.009754  559927 oci.go:103] Successfully created a docker volume addons-220192
	I0927 00:33:56.009852  559927 cli_runner.go:164] Run: docker run --rm --name addons-220192-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-220192 --entrypoint /usr/bin/test -v addons-220192:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0927 00:33:57.993052  559927 cli_runner.go:217] Completed: docker run --rm --name addons-220192-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-220192 --entrypoint /usr/bin/test -v addons-220192:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (1.983158106s)
	I0927 00:33:57.993080  559927 oci.go:107] Successfully prepared a docker volume addons-220192
	I0927 00:33:57.993109  559927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:33:57.993128  559927 kic.go:194] Starting extracting preloaded images to volume ...
	I0927 00:33:57.993194  559927 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-220192:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0927 00:34:02.014141  559927 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-220192:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (4.020882938s)
	I0927 00:34:02.014176  559927 kic.go:203] duration metric: took 4.021043549s to extract preloaded images to volume ...
	W0927 00:34:02.014327  559927 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0927 00:34:02.014451  559927 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0927 00:34:02.064494  559927 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-220192 --name addons-220192 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-220192 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-220192 --network addons-220192 --ip 192.168.49.2 --volume addons-220192:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0927 00:34:02.388520  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Running}}
	I0927 00:34:02.409325  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:02.431602  559927 cli_runner.go:164] Run: docker exec addons-220192 stat /var/lib/dpkg/alternatives/iptables
	I0927 00:34:02.480602  559927 oci.go:144] the created container "addons-220192" has a running status.
	I0927 00:34:02.480633  559927 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa...
	I0927 00:34:03.617795  559927 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0927 00:34:03.637260  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:03.653027  559927 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0927 00:34:03.653052  559927 kic_runner.go:114] Args: [docker exec --privileged addons-220192 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0927 00:34:03.700155  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:03.717668  559927 machine.go:93] provisionDockerMachine start ...
	I0927 00:34:03.717764  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:03.733546  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:03.733814  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:03.733823  559927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 00:34:03.862293  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-220192
	
	I0927 00:34:03.862317  559927 ubuntu.go:169] provisioning hostname "addons-220192"
	I0927 00:34:03.862386  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:03.879096  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:03.879355  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:03.879374  559927 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-220192 && echo "addons-220192" | sudo tee /etc/hostname
	I0927 00:34:04.019276  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-220192
	
	I0927 00:34:04.019405  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:04.036545  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:04.036798  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:04.036821  559927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-220192' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-220192/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-220192' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:34:04.162591  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:34:04.162681  559927 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19711-553751/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-553751/.minikube}
	I0927 00:34:04.162739  559927 ubuntu.go:177] setting up certificates
	I0927 00:34:04.162769  559927 provision.go:84] configureAuth start
	I0927 00:34:04.162865  559927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-220192
	I0927 00:34:04.179414  559927 provision.go:143] copyHostCerts
	I0927 00:34:04.179501  559927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/ca.pem (1078 bytes)
	I0927 00:34:04.179628  559927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/cert.pem (1123 bytes)
	I0927 00:34:04.179689  559927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/key.pem (1675 bytes)
	I0927 00:34:04.179747  559927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem org=jenkins.addons-220192 san=[127.0.0.1 192.168.49.2 addons-220192 localhost minikube]
	I0927 00:34:04.940382  559927 provision.go:177] copyRemoteCerts
	I0927 00:34:04.940458  559927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:34:04.940508  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:04.963981  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.060102  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:34:05.084207  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:34:05.107968  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:34:05.131460  559927 provision.go:87] duration metric: took 968.661896ms to configureAuth
	I0927 00:34:05.131489  559927 ubuntu.go:193] setting minikube options for container-runtime
	I0927 00:34:05.131682  559927 config.go:182] Loaded profile config "addons-220192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:34:05.131795  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.148107  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:05.148363  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:05.148380  559927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:34:05.367545  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:34:05.367569  559927 machine.go:96] duration metric: took 1.649879839s to provisionDockerMachine
	I0927 00:34:05.367581  559927 client.go:171] duration metric: took 10.210842557s to LocalClient.Create
	I0927 00:34:05.367593  559927 start.go:167] duration metric: took 10.210902338s to libmachine.API.Create "addons-220192"
	I0927 00:34:05.367601  559927 start.go:293] postStartSetup for "addons-220192" (driver="docker")
	I0927 00:34:05.367612  559927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:34:05.367677  559927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:34:05.367727  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.385055  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.479714  559927 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:34:05.483003  559927 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 00:34:05.483039  559927 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 00:34:05.483050  559927 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 00:34:05.483057  559927 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 00:34:05.483067  559927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-553751/.minikube/addons for local assets ...
	I0927 00:34:05.483137  559927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-553751/.minikube/files for local assets ...
	I0927 00:34:05.483165  559927 start.go:296] duration metric: took 115.558426ms for postStartSetup
	I0927 00:34:05.483490  559927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-220192
	I0927 00:34:05.499440  559927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/config.json ...
	I0927 00:34:05.499737  559927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:34:05.499789  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.515159  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.603311  559927 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 00:34:05.607625  559927 start.go:128] duration metric: took 10.453518321s to createHost
	I0927 00:34:05.607654  559927 start.go:83] releasing machines lock for "addons-220192", held for 10.453681394s
	I0927 00:34:05.607730  559927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-220192
	I0927 00:34:05.623821  559927 ssh_runner.go:195] Run: cat /version.json
	I0927 00:34:05.623878  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.623938  559927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:34:05.624015  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.641153  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.648618  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.857953  559927 ssh_runner.go:195] Run: systemctl --version
	I0927 00:34:05.862287  559927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:34:06.008454  559927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 00:34:06.013211  559927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:34:06.035213  559927 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0927 00:34:06.035367  559927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:34:06.065128  559927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0927 00:34:06.065196  559927 start.go:495] detecting cgroup driver to use...
	I0927 00:34:06.065243  559927 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 00:34:06.065323  559927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:34:06.081824  559927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:34:06.093535  559927 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:34:06.093645  559927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:34:06.108200  559927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:34:06.123249  559927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:34:06.207618  559927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:34:06.299470  559927 docker.go:233] disabling docker service ...
	I0927 00:34:06.299551  559927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:34:06.320068  559927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:34:06.331991  559927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:34:06.415970  559927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:34:06.517135  559927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:34:06.528773  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:34:06.545373  559927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:34:06.545478  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.555271  559927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:34:06.555361  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.565035  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.574675  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.584230  559927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:34:06.593099  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.602922  559927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.618358  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.628225  559927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:34:06.636420  559927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:34:06.644684  559927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:34:06.724669  559927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:34:06.839759  559927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:34:06.839877  559927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:34:06.843772  559927 start.go:563] Will wait 60s for crictl version
	I0927 00:34:06.843909  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:34:06.847728  559927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:34:06.886811  559927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0927 00:34:06.886963  559927 ssh_runner.go:195] Run: crio --version
	I0927 00:34:06.923924  559927 ssh_runner.go:195] Run: crio --version
	I0927 00:34:06.961630  559927 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0927 00:34:06.964039  559927 cli_runner.go:164] Run: docker network inspect addons-220192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:34:06.979344  559927 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0927 00:34:06.982885  559927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:34:06.993886  559927 kubeadm.go:883] updating cluster {Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:34:06.994013  559927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:34:06.994079  559927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:34:07.065666  559927 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:34:07.065693  559927 crio.go:433] Images already preloaded, skipping extraction
	I0927 00:34:07.065759  559927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:34:07.103089  559927 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:34:07.103111  559927 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:34:07.103119  559927 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0927 00:34:07.103212  559927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-220192 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:34:07.103294  559927 ssh_runner.go:195] Run: crio config
	I0927 00:34:07.184942  559927 cni.go:84] Creating CNI manager for ""
	I0927 00:34:07.185003  559927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:34:07.185030  559927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:34:07.185073  559927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-220192 NodeName:addons-220192 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:34:07.185246  559927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-220192"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:34:07.185338  559927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:34:07.193935  559927 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:34:07.194048  559927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:34:07.202460  559927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0927 00:34:07.219678  559927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:34:07.237053  559927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0927 00:34:07.254481  559927 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0927 00:34:07.257688  559927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:34:07.268344  559927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:34:07.360228  559927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:34:07.373741  559927 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192 for IP: 192.168.49.2
	I0927 00:34:07.373817  559927 certs.go:194] generating shared ca certs ...
	I0927 00:34:07.373850  559927 certs.go:226] acquiring lock for ca certs: {Name:mkd73b356b28d0818fea73c44481b0cb2597afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:07.374052  559927 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key
	I0927 00:34:07.720680  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt ...
	I0927 00:34:07.720716  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt: {Name:mkbfcd9c6c45e82aff1171fec506aac41dc5280a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:07.720931  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key ...
	I0927 00:34:07.720946  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key: {Name:mk27b9aca1fe71da4c843dcf3c985bda93669b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:07.721037  559927 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key
	I0927 00:34:09.101274  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt ...
	I0927 00:34:09.101305  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt: {Name:mkdc0759b42a37859fc6068ba22254e0927be300 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.101947  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key ...
	I0927 00:34:09.101964  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key: {Name:mke7b97bcbcb62de5f7a0ca1a1958a806a1e0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.102051  559927 certs.go:256] generating profile certs ...
	I0927 00:34:09.102113  559927 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.key
	I0927 00:34:09.102130  559927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt with IP's: []
	I0927 00:34:09.315290  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt ...
	I0927 00:34:09.315324  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: {Name:mkfff86d6c11512911cf0969854882c551536630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.315544  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.key ...
	I0927 00:34:09.315558  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.key: {Name:mk1634c2995d45b5e8b115cffc851a552ceefda4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.315645  559927 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9
	I0927 00:34:09.315665  559927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0927 00:34:09.625710  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9 ...
	I0927 00:34:09.625740  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9: {Name:mk7150966e38d5953f0ffbbca37251c426945939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.625923  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9 ...
	I0927 00:34:09.625936  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9: {Name:mk05d3eba820733b8f36b06f33f5470f331f3307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.626021  559927 certs.go:381] copying /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9 -> /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt
	I0927 00:34:09.626100  559927 certs.go:385] copying /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9 -> /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key
	I0927 00:34:09.626154  559927 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key
	I0927 00:34:09.626175  559927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt with IP's: []
	I0927 00:34:10.552918  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt ...
	I0927 00:34:10.552956  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt: {Name:mkf5cd4cf9e9eaebbd419908d7e57768395a038f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:10.553141  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key ...
	I0927 00:34:10.553160  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key: {Name:mk5fec058a0a902adcdcf9089d18b3d6355794eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:10.553344  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem (1679 bytes)
	I0927 00:34:10.553391  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:34:10.553423  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:34:10.553451  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem (1675 bytes)
	I0927 00:34:10.554112  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:34:10.580588  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 00:34:10.603802  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:34:10.628713  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 00:34:10.653540  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 00:34:10.677124  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 00:34:10.701503  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:34:10.724622  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 00:34:10.748189  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:34:10.772084  559927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:34:10.789400  559927 ssh_runner.go:195] Run: openssl version
	I0927 00:34:10.794925  559927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:34:10.804621  559927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:34:10.808078  559927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:34 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:34:10.808143  559927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:34:10.814650  559927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:34:10.823722  559927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:34:10.826819  559927 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:34:10.826870  559927 kubeadm.go:392] StartCluster: {Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:34:10.826950  559927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:34:10.827020  559927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:34:10.866663  559927 cri.go:89] found id: ""
	I0927 00:34:10.866760  559927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:34:10.875415  559927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:34:10.883762  559927 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0927 00:34:10.883827  559927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:34:10.893704  559927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:34:10.893724  559927 kubeadm.go:157] found existing configuration files:
	
	I0927 00:34:10.893774  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:34:10.902339  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:34:10.902423  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:34:10.910637  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:34:10.919187  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:34:10.919251  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:34:10.927057  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:34:10.935278  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:34:10.935346  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:34:10.943456  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:34:10.951694  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:34:10.951762  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 00:34:10.959916  559927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0927 00:34:10.995459  559927 kubeadm.go:310] W0927 00:34:10.994701    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:34:10.996690  559927 kubeadm.go:310] W0927 00:34:10.996201    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:34:11.020983  559927 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0927 00:34:11.080895  559927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 00:34:29.763728  559927 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:34:29.763788  559927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:34:29.763877  559927 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0927 00:34:29.763937  559927 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0927 00:34:29.764020  559927 kubeadm.go:310] OS: Linux
	I0927 00:34:29.764081  559927 kubeadm.go:310] CGROUPS_CPU: enabled
	I0927 00:34:29.764137  559927 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0927 00:34:29.764217  559927 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0927 00:34:29.764274  559927 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0927 00:34:29.764324  559927 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0927 00:34:29.764406  559927 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0927 00:34:29.764467  559927 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0927 00:34:29.764528  559927 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0927 00:34:29.764588  559927 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0927 00:34:29.764661  559927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:34:29.764772  559927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:34:29.764867  559927 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:34:29.764931  559927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:34:29.766962  559927 out.go:235]   - Generating certificates and keys ...
	I0927 00:34:29.767068  559927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:34:29.767153  559927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:34:29.767232  559927 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:34:29.767300  559927 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:34:29.767387  559927 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:34:29.767453  559927 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:34:29.767527  559927 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:34:29.767659  559927 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-220192 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:34:29.767722  559927 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:34:29.767855  559927 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-220192 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:34:29.767928  559927 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:34:29.768001  559927 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:34:29.768051  559927 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:34:29.768131  559927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:34:29.768206  559927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:34:29.768283  559927 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:34:29.768353  559927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:34:29.768436  559927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:34:29.768511  559927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:34:29.768606  559927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:34:29.768699  559927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:34:29.769783  559927 out.go:235]   - Booting up control plane ...
	I0927 00:34:29.769896  559927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:34:29.769989  559927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:34:29.770065  559927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:34:29.770172  559927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:34:29.770279  559927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:34:29.770329  559927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:34:29.770469  559927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:34:29.770575  559927 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:34:29.770637  559927 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.50140432s
	I0927 00:34:29.770724  559927 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:34:29.770784  559927 kubeadm.go:310] [api-check] The API server is healthy after 6.001791706s
	I0927 00:34:29.770893  559927 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:34:29.771024  559927 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:34:29.771086  559927 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:34:29.771270  559927 kubeadm.go:310] [mark-control-plane] Marking the node addons-220192 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:34:29.771331  559927 kubeadm.go:310] [bootstrap-token] Using token: 9ix9q6.4kz2sbtsprzpkswr
	I0927 00:34:29.773367  559927 out.go:235]   - Configuring RBAC rules ...
	I0927 00:34:29.773551  559927 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:34:29.773700  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:34:29.773871  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:34:29.774024  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:34:29.774161  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:34:29.774292  559927 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:34:29.774445  559927 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:34:29.774498  559927 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:34:29.774551  559927 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:34:29.774558  559927 kubeadm.go:310] 
	I0927 00:34:29.774618  559927 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:34:29.774626  559927 kubeadm.go:310] 
	I0927 00:34:29.774701  559927 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:34:29.774709  559927 kubeadm.go:310] 
	I0927 00:34:29.774754  559927 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:34:29.774813  559927 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:34:29.774870  559927 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:34:29.774879  559927 kubeadm.go:310] 
	I0927 00:34:29.774933  559927 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:34:29.774941  559927 kubeadm.go:310] 
	I0927 00:34:29.774988  559927 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:34:29.774996  559927 kubeadm.go:310] 
	I0927 00:34:29.775047  559927 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:34:29.775123  559927 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:34:29.775193  559927 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:34:29.775201  559927 kubeadm.go:310] 
	I0927 00:34:29.775284  559927 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:34:29.775362  559927 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:34:29.775370  559927 kubeadm.go:310] 
	I0927 00:34:29.775452  559927 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9ix9q6.4kz2sbtsprzpkswr \
	I0927 00:34:29.775556  559927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d8dda315011cb74d53922a23f64d2f20e11a31a3286152848c02c6c9df47cdc \
	I0927 00:34:29.775579  559927 kubeadm.go:310] 	--control-plane 
	I0927 00:34:29.775584  559927 kubeadm.go:310] 
	I0927 00:34:29.775668  559927 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:34:29.775676  559927 kubeadm.go:310] 
	I0927 00:34:29.775757  559927 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9ix9q6.4kz2sbtsprzpkswr \
	I0927 00:34:29.775873  559927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d8dda315011cb74d53922a23f64d2f20e11a31a3286152848c02c6c9df47cdc 
	I0927 00:34:29.775887  559927 cni.go:84] Creating CNI manager for ""
	I0927 00:34:29.775895  559927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:34:29.778035  559927 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 00:34:29.779166  559927 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 00:34:29.783667  559927 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 00:34:29.783687  559927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 00:34:29.802342  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0927 00:34:30.115884  559927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:34:30.116099  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:30.116240  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-220192 minikube.k8s.io/updated_at=2024_09_27T00_34_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-220192 minikube.k8s.io/primary=true
	I0927 00:34:30.127679  559927 ops.go:34] apiserver oom_adj: -16
	I0927 00:34:30.288090  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:30.788920  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:31.288744  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:31.788793  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:32.288933  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:32.788947  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:33.288195  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:33.380134  559927 kubeadm.go:1113] duration metric: took 3.264113362s to wait for elevateKubeSystemPrivileges
	I0927 00:34:33.380167  559927 kubeadm.go:394] duration metric: took 22.553300472s to StartCluster
	I0927 00:34:33.380185  559927 settings.go:142] acquiring lock: {Name:mk5b1f005001018637d448709269193603885722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:33.380304  559927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 00:34:33.380761  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/kubeconfig: {Name:mkc30ade55bf966f83b95c0af3746bfadfd3f379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:33.380969  559927 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:34:33.381135  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:34:33.381376  559927 config.go:182] Loaded profile config "addons-220192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:34:33.381415  559927 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0927 00:34:33.381499  559927 addons.go:69] Setting yakd=true in profile "addons-220192"
	I0927 00:34:33.381517  559927 addons.go:234] Setting addon yakd=true in "addons-220192"
	I0927 00:34:33.381542  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.382036  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.382470  559927 addons.go:69] Setting metrics-server=true in profile "addons-220192"
	I0927 00:34:33.382492  559927 addons.go:234] Setting addon metrics-server=true in "addons-220192"
	I0927 00:34:33.382517  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.382550  559927 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-220192"
	I0927 00:34:33.382568  559927 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-220192"
	I0927 00:34:33.382595  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.382967  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.383084  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.383406  559927 out.go:177] * Verifying Kubernetes components...
	I0927 00:34:33.388011  559927 addons.go:69] Setting registry=true in profile "addons-220192"
	I0927 00:34:33.388044  559927 addons.go:234] Setting addon registry=true in "addons-220192"
	I0927 00:34:33.388084  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.388540  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.388723  559927 addons.go:69] Setting cloud-spanner=true in profile "addons-220192"
	I0927 00:34:33.388755  559927 addons.go:234] Setting addon cloud-spanner=true in "addons-220192"
	I0927 00:34:33.388797  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.389200  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.392076  559927 addons.go:69] Setting storage-provisioner=true in profile "addons-220192"
	I0927 00:34:33.392108  559927 addons.go:234] Setting addon storage-provisioner=true in "addons-220192"
	I0927 00:34:33.392149  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.392954  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.395344  559927 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-220192"
	I0927 00:34:33.395417  559927 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-220192"
	I0927 00:34:33.395737  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.396386  559927 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-220192"
	I0927 00:34:33.396450  559927 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-220192"
	I0927 00:34:33.396481  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.396929  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.404208  559927 addons.go:69] Setting volcano=true in profile "addons-220192"
	I0927 00:34:33.404292  559927 addons.go:234] Setting addon volcano=true in "addons-220192"
	I0927 00:34:33.404344  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.404886  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.415902  559927 addons.go:69] Setting default-storageclass=true in profile "addons-220192"
	I0927 00:34:33.415938  559927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-220192"
	I0927 00:34:33.416335  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.419241  559927 addons.go:69] Setting volumesnapshots=true in profile "addons-220192"
	I0927 00:34:33.419284  559927 addons.go:234] Setting addon volumesnapshots=true in "addons-220192"
	I0927 00:34:33.419325  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.419808  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.436466  559927 addons.go:69] Setting gcp-auth=true in profile "addons-220192"
	I0927 00:34:33.436505  559927 mustload.go:65] Loading cluster: addons-220192
	I0927 00:34:33.436716  559927 config.go:182] Loaded profile config "addons-220192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:34:33.436976  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.439910  559927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:34:33.454508  559927 addons.go:69] Setting ingress=true in profile "addons-220192"
	I0927 00:34:33.454557  559927 addons.go:234] Setting addon ingress=true in "addons-220192"
	I0927 00:34:33.454603  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.455134  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.470431  559927 addons.go:69] Setting ingress-dns=true in profile "addons-220192"
	I0927 00:34:33.470469  559927 addons.go:234] Setting addon ingress-dns=true in "addons-220192"
	I0927 00:34:33.470522  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.471029  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.480467  559927 addons.go:69] Setting inspektor-gadget=true in profile "addons-220192"
	I0927 00:34:33.480560  559927 addons.go:234] Setting addon inspektor-gadget=true in "addons-220192"
	I0927 00:34:33.480643  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.481279  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.501566  559927 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:34:33.502172  559927 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:34:33.515339  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:34:33.515409  559927 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:34:33.515513  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.533114  559927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:34:33.511884  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:34:33.512482  559927 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:34:33.533606  559927 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:34:33.534258  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.539191  559927 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-220192"
	I0927 00:34:33.539240  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.539680  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.555238  559927 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:34:33.555260  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:34:33.555320  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.575338  559927 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:34:33.575507  559927 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:34:33.579330  559927 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:34:33.579968  559927 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:34:33.579984  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:34:33.580043  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.589869  559927 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:34:33.589937  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:34:33.590044  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.592413  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:34:33.592687  559927 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:34:33.592703  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:34:33.592762  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.594005  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:34:33.594022  559927 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:34:33.594072  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.594614  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:34:33.597708  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:34:33.599885  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:34:33.601815  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:34:33.603187  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:34:33.604424  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:34:33.606160  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:34:33.608900  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:34:33.612242  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:34:33.612266  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:34:33.612345  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	W0927 00:34:33.625523  559927 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0927 00:34:33.636029  559927 addons.go:234] Setting addon default-storageclass=true in "addons-220192"
	I0927 00:34:33.636070  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.636475  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.653660  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:34:33.658697  559927 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 00:34:33.662778  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:34:33.663023  559927 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:34:33.663038  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 00:34:33.663104  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.676402  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 00:34:33.705602  559927 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:34:33.705630  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 00:34:33.705724  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.728629  559927 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:34:33.732158  559927 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:34:33.732181  559927 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:34:33.732260  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.761441  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.777582  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.779733  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.781803  559927 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:34:33.785371  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.796375  559927 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:34:33.796498  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.799961  559927 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:34:33.799986  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:34:33.800052  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.803725  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.805040  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.827419  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.827850  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.868201  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.878799  559927 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:34:33.878821  559927 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:34:33.878995  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.889070  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.894820  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	W0927 00:34:33.897254  559927 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0927 00:34:33.897281  559927 retry.go:31] will retry after 222.514368ms: ssh: handshake failed: EOF
	I0927 00:34:33.899204  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.924221  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:34.099923  559927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:34:34.099950  559927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:34:34.143807  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:34:34.143833  559927 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:34:34.150094  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:34:34.152840  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:34:34.152862  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:34:34.152949  559927 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:34:34.152971  559927 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:34:34.228010  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:34:34.228039  559927 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:34:34.241657  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:34:34.253784  559927 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:34:34.253808  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:34:34.256601  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:34:34.268169  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:34:34.271096  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:34:34.271119  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:34:34.275626  559927 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:34:34.275648  559927 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:34:34.293383  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:34:34.300829  559927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:34:34.300856  559927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:34:34.322150  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:34:34.322176  559927 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:34:34.344962  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:34:34.344989  559927 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:34:34.369058  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:34:34.404038  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:34:34.425344  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:34:34.432017  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:34:34.432041  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:34:34.435286  559927 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:34:34.435320  559927 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:34:34.435999  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:34:34.436016  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:34:34.474152  559927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:34:34.474181  559927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:34:34.511874  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:34:34.511910  559927 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:34:34.590980  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:34:34.591007  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:34:34.594814  559927 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:34:34.594884  559927 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:34:34.609412  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:34:34.664262  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:34:34.664331  559927 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:34:34.667546  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:34:34.720328  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:34:34.720354  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:34:34.789427  559927 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:34:34.789454  559927 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:34:34.797435  559927 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.357437454s)
	I0927 00:34:34.797514  559927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:34:34.797580  559927 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.416422565s)
	I0927 00:34:34.797731  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
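The command above patches the CoreDNS ConfigMap so cluster workloads can resolve host.minikube.internal to the host gateway. Reconstructing the effect of the two sed expressions (a sketch derived from the command itself, not a dump taken from the cluster), the patched Corefile should gain a log directive ahead of errors and a hosts block ahead of the existing forward plugin:

	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf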
	I0927 00:34:34.820770  559927 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:34:34.820801  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:34:34.864725  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:34:34.864753  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:34:34.933391  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:34:34.981553  559927 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:34:34.981582  559927 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:34:35.002650  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:34:35.002677  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:34:35.126608  559927 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:34:35.126635  559927 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:34:35.143210  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:34:35.143238  559927 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:34:35.205388  559927 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:34:35.205414  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:34:35.215693  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:34:35.215723  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:34:35.251131  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:34:35.275630  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:34:35.275666  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:34:35.367653  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:34:35.367680  559927 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:34:35.496151  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:34:37.834979  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.684849473s)
	I0927 00:34:39.467821  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.226126912s)
	I0927 00:34:39.467861  559927 addons.go:475] Verifying addon ingress=true in "addons-220192"
	I0927 00:34:39.468074  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.211440688s)
	I0927 00:34:39.468139  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.199948358s)
	I0927 00:34:39.468192  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.174786518s)
	I0927 00:34:39.468376  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.099295453s)
	I0927 00:34:39.468473  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.064403818s)
	I0927 00:34:39.468511  559927 addons.go:475] Verifying addon registry=true in "addons-220192"
	I0927 00:34:39.468878  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.04350358s)
	I0927 00:34:39.468943  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.859503354s)
	I0927 00:34:39.469053  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.801481487s)
	I0927 00:34:39.469062  559927 addons.go:475] Verifying addon metrics-server=true in "addons-220192"
	I0927 00:34:39.469120  559927 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.671373109s)
	I0927 00:34:39.469132  559927 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0927 00:34:39.469138  559927 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.671602913s)
	I0927 00:34:39.469967  559927 node_ready.go:35] waiting up to 6m0s for node "addons-220192" to be "Ready" ...
	I0927 00:34:39.472151  559927 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-220192 service yakd-dashboard -n yakd-dashboard
	
	I0927 00:34:39.472243  559927 out.go:177] * Verifying ingress addon...
	I0927 00:34:39.472289  559927 out.go:177] * Verifying registry addon...
	I0927 00:34:39.475538  559927 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 00:34:39.476423  559927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:34:39.494665  559927 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:34:39.494694  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:39.496798  559927 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 00:34:39.496825  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
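The kapi waits above poll the pods behind each label selector until they leave Pending. For anyone reproducing this outside the test harness, the equivalent manual checks would be along these lines (assumed kubectl usage, not part of the harness):

	kubectl --context addons-220192 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl --context addons-220192 -n kube-system get pods -l kubernetes.io/minikube-addons=registry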
	W0927 00:34:39.511262  559927 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
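This warning is an optimistic-concurrency conflict: between reading the local-path StorageClass and writing the default-class annotation, another writer updated the object, so the update was rejected and has to be re-applied against the latest resourceVersion. The manual equivalent of the step that failed would be roughly the following patch (a hedged sketch, not the code path minikube actually uses):

	kubectl --context addons-220192 patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'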
	I0927 00:34:39.579923  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.328744084s)
	I0927 00:34:39.580128  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.646707554s)
	W0927 00:34:39.580156  559927 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:34:39.580183  559927 retry.go:31] will retry after 283.440734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
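The failure above is the usual CRD-establishment race: the same apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass, but the freshly created CRDs are not yet served by the API server, so the class has no resource mapping. The retry a few lines below succeeds once the CRDs are established; when applying these manifests by hand, one way to sidestep the race would be to wait for the CRDs first, for example (standard kubectl, assumed rather than taken from minikube's code):

	kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml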
	I0927 00:34:39.831932  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.335725047s)
	I0927 00:34:39.831979  559927 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-220192"
	I0927 00:34:39.836412  559927 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:34:39.840109  559927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:34:39.846548  559927 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:34:39.846621  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:39.864697  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:34:40.005609  559927 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-220192" context rescaled to 1 replicas
	I0927 00:34:40.006033  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:40.013393  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:40.344695  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:40.482976  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:40.484052  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:40.844800  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:40.983568  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:40.985312  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:41.344228  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:41.473653  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:41.480232  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:41.481108  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:41.844824  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:41.984071  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:41.984993  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:42.344135  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:42.481608  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:42.482992  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:42.819929  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.955146397s)
	I0927 00:34:42.845156  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:42.980034  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:42.980570  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:43.344660  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:43.464416  559927 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:34:43.464573  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:43.474433  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:43.481829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:43.483496  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:43.483835  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:43.590588  559927 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:34:43.609201  559927 addons.go:234] Setting addon gcp-auth=true in "addons-220192"
	I0927 00:34:43.609254  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:43.609751  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:43.626431  559927 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:34:43.626487  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:43.644327  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:43.741116  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:34:43.743530  559927 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:34:43.746014  559927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:34:43.746031  559927 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:34:43.763769  559927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:34:43.763793  559927 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:34:43.780969  559927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:34:43.780996  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:34:43.799112  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:34:43.844675  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:43.980511  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:43.982005  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:44.322692  559927 addons.go:475] Verifying addon gcp-auth=true in "addons-220192"
	I0927 00:34:44.325770  559927 out.go:177] * Verifying gcp-auth addon...
	I0927 00:34:44.329465  559927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:34:44.333766  559927 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:34:44.333790  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:44.344656  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:44.479817  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:44.480105  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:44.832869  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:44.844511  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:44.979614  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:44.980284  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:45.332817  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:45.343965  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:45.479741  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:45.481120  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:45.832899  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:45.844116  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:45.973317  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:45.979458  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:45.980299  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:46.332489  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:46.343738  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:46.479974  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:46.480735  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:46.833062  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:46.843843  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:46.979508  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:46.980073  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:47.333256  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:47.343452  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:47.479659  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:47.480382  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:47.832663  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:47.843746  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:47.973598  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:47.982398  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:47.983191  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:48.333001  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:48.343792  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:48.480415  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:48.480692  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:48.833104  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:48.843760  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:48.979483  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:48.980880  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:49.333641  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:49.344144  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:49.480257  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:49.483517  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:49.833431  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:49.844206  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:49.991059  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:49.992115  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:49.992352  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:50.332707  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:50.344159  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:50.480722  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:50.481738  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:50.833298  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:50.843495  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:50.979455  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:50.981405  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:51.334674  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:51.344002  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:51.479106  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:51.480280  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:51.833792  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:51.844086  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:51.982704  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:51.983622  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:52.333240  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:52.343403  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:52.474546  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:52.479449  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:52.482139  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:52.832804  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:52.843907  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:52.979328  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:52.980447  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:53.333021  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:53.343677  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:53.479431  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:53.480526  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:53.832723  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:53.843485  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:53.979263  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:53.979973  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:54.333522  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:54.348182  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:54.479005  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:54.480787  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:54.832509  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:54.844676  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:54.974064  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:54.979672  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:54.980722  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:55.333594  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:55.343740  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:55.479360  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:55.480245  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:55.832680  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:55.843543  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:55.979952  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:55.980389  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:56.332637  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:56.344144  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:56.479599  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:56.480801  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:56.832314  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:56.843591  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:56.979818  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:56.982964  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:57.333340  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:57.343648  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:57.473718  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:57.479686  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:57.480106  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:57.833276  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:57.843837  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:57.980259  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:57.980971  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:58.332941  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:58.344198  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:58.479441  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:58.480562  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:58.832511  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:58.843959  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:58.979304  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:58.979902  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:59.332471  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:59.343688  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:59.473837  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:59.480105  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:59.480820  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:59.833342  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:59.844089  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:59.979965  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:59.980877  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:00.334431  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:00.344836  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:00.479625  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:00.481083  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:00.833462  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:00.844379  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:00.979507  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:00.980347  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:01.333369  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:01.344056  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:01.480874  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:01.481106  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:01.833477  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:01.843808  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:01.973440  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:01.981517  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:01.981736  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:02.332928  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:02.344231  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:02.479408  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:02.480259  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:02.832727  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:02.843980  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:02.979737  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:02.980467  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:03.332964  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:03.343740  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:03.479543  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:03.480087  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:03.833215  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:03.844240  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:03.974500  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:03.980031  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:03.981606  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:04.332668  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:04.343749  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:04.479236  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:04.480360  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:04.833389  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:04.844094  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:04.980186  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:04.980297  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:05.332559  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:05.343815  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:05.479519  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:05.480644  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:05.832634  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:05.843675  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:05.979646  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:05.980528  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:06.332905  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:06.344008  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:06.473815  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:06.480097  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:06.480815  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:06.833469  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:06.844027  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:06.979148  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:06.980069  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:07.332568  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:07.343773  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:07.479920  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:07.479969  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:07.833963  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:07.843803  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:07.980212  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:07.980996  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:08.333337  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:08.343626  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:08.479786  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:08.480531  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:08.832973  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:08.844021  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:08.973090  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:08.980044  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:08.980573  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:09.332531  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:09.348321  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:09.479485  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:09.479813  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:09.833068  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:09.844031  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:09.979535  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:09.981261  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:10.333874  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:10.354135  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:10.484607  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:10.485964  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:10.832728  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:10.844943  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:10.973698  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:10.980277  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:10.980859  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:11.333372  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:11.345921  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:11.479342  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:11.480218  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:11.833074  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:11.844071  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:11.979619  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:11.981229  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:12.333379  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:12.344154  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:12.480895  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:12.481142  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:12.833217  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:12.843423  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:12.979301  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:12.980351  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:13.337392  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:13.343917  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:13.473805  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:13.479743  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:13.481489  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:13.832829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:13.844071  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:13.979477  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:13.980477  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:14.332885  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:14.343685  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:14.479765  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:14.480539  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:14.832829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:14.843971  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:14.980105  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:14.980578  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:15.332551  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:15.343348  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:15.479922  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:15.480686  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:15.833208  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:15.843933  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:15.973782  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:15.979898  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:15.980469  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:16.333214  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:16.344108  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:16.479743  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:16.480603  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:16.833361  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:16.843717  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:16.979315  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:16.980756  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:17.333389  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:17.343864  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:17.480054  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:17.480955  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:17.833334  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:17.843911  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:17.979629  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:17.980181  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:18.332516  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:18.343396  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:18.473097  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:18.479374  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:18.479963  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:18.832640  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:18.844049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:18.996522  559927 node_ready.go:49] node "addons-220192" has status "Ready":"True"
	I0927 00:35:18.996599  559927 node_ready.go:38] duration metric: took 39.526610666s for node "addons-220192" to be "Ready" ...
	I0927 00:35:18.996626  559927 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:35:19.019040  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:19.023994  559927 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:35:19.024068  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:19.032376  559927 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wnhpd" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:19.398908  559927 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:35:19.398987  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:19.399566  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:19.483156  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:19.490619  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:19.833611  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:19.852005  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:20.016049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:20.016250  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:20.347509  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:20.351821  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:20.481433  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:20.482332  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:20.542199  559927 pod_ready.go:93] pod "coredns-7c65d6cfc9-wnhpd" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.542229  559927 pod_ready.go:82] duration metric: took 1.509780007s for pod "coredns-7c65d6cfc9-wnhpd" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.542251  559927 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.548166  559927 pod_ready.go:93] pod "etcd-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.548192  559927 pod_ready.go:82] duration metric: took 5.932914ms for pod "etcd-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.548207  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.553717  559927 pod_ready.go:93] pod "kube-apiserver-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.553741  559927 pod_ready.go:82] duration metric: took 5.524718ms for pod "kube-apiserver-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.553754  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.559029  559927 pod_ready.go:93] pod "kube-controller-manager-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.559057  559927 pod_ready.go:82] duration metric: took 5.294414ms for pod "kube-controller-manager-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.559071  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-shqd9" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.573997  559927 pod_ready.go:93] pod "kube-proxy-shqd9" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.574023  559927 pod_ready.go:82] duration metric: took 14.944163ms for pod "kube-proxy-shqd9" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.574036  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.833824  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:20.848660  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:20.974442  559927 pod_ready.go:93] pod "kube-scheduler-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.974470  559927 pod_ready.go:82] duration metric: took 400.425942ms for pod "kube-scheduler-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.974484  559927 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.982452  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:20.984121  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:21.333221  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:21.345136  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:21.482607  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:21.483622  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:21.833129  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:21.845258  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:21.981612  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:21.982849  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:22.333804  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:22.345228  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:22.481208  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:22.482132  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:22.833026  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:22.845328  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:22.980591  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:22.981225  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:22.984148  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:23.332828  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:23.345437  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:23.480956  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:23.481629  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:23.833324  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:23.845811  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:23.980489  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:23.981126  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:24.334215  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:24.345777  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:24.492856  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:24.501358  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:24.833375  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:24.845765  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:24.984320  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:24.985535  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:25.333030  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:25.346129  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:25.483387  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:25.483462  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:25.491536  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:25.833367  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:25.845582  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:25.986028  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:25.987700  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:26.333088  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:26.347436  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:26.482707  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:26.485635  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:26.835052  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:26.936552  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:26.991369  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:26.993292  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:27.333040  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:27.349818  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:27.490040  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:27.500797  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:27.502364  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:27.833179  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:27.844956  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:27.987680  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:27.989267  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:28.334430  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:28.345015  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:28.482024  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:28.482969  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:28.834146  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:28.845784  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:28.981547  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:28.987897  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:29.332824  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:29.345018  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:29.481343  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:29.483392  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:29.833401  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:29.845939  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:29.983969  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:29.986347  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:29.991317  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:30.333446  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:30.344995  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:30.508060  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:30.509114  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:30.833954  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:30.847331  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:30.983296  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:30.984469  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:31.333529  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:31.346615  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:31.483463  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:31.485699  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:31.834409  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:31.847606  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:31.990264  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:31.991499  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:31.995169  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:32.333938  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:32.345440  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:32.493919  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:32.495619  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:32.838133  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:32.848315  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:33.004360  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:33.006597  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:33.334374  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:33.348157  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:33.487589  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:33.488353  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:33.833623  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:33.845948  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:34.000333  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:34.002102  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:34.006988  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:34.352293  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:34.359508  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:34.502221  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:34.503150  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:34.835304  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:34.865176  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:34.985218  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:34.985823  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:35.334075  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:35.345971  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:35.483800  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:35.491250  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:35.833110  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:35.846328  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:35.979803  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:35.982985  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:36.335407  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:36.345098  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:36.481660  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:36.481954  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:36.483328  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:36.832836  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:36.844919  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:36.982758  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:36.984021  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:37.332859  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:37.344703  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:37.479523  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:37.482358  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:37.833392  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:37.845097  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:37.981768  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:37.982364  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:38.333562  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:38.346750  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:38.538171  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:38.539659  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:38.574486  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:38.833410  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:38.845154  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:38.983941  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:38.986331  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:39.333236  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:39.344860  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:39.487423  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:39.488653  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:39.833699  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:39.845135  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:39.982293  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:39.983320  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:40.334049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:40.345576  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:40.487727  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:40.489357  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:40.850545  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:40.869817  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:40.988622  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:40.997340  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:40.999067  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:41.333838  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:41.344941  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:41.481094  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:41.482258  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:41.833163  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:41.844771  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:41.983305  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:41.984333  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:42.334272  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:42.345229  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:42.492644  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:42.493566  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:42.832709  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:42.851142  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:42.983002  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:42.987339  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:43.333193  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:43.345053  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:43.483125  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:43.484113  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:43.488641  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:43.833337  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:43.845279  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:43.980602  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:43.984005  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:44.333444  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:44.345218  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:44.481670  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:44.482647  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:44.835774  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:44.845367  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:44.995835  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:44.998309  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:45.333453  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:45.345157  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:45.480354  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:45.484276  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:45.833765  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:45.845022  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:45.982788  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:45.986074  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:45.988189  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:46.333646  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:46.346350  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:46.491046  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:46.492583  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:46.835571  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:46.846801  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:46.981975  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:46.983265  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:47.333111  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:47.345419  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:47.484650  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:47.489278  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:47.832786  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:47.845960  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:47.991387  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:47.992583  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:48.333677  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:48.347026  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:48.492253  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:48.493184  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:48.499877  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:48.833921  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:48.845808  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:48.979562  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:48.982627  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:49.333741  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:49.344581  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:49.480529  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:49.480919  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:49.833732  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:49.845393  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:49.981677  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:49.982936  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:50.333400  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:50.346044  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:50.480790  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:50.483023  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:50.833421  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:50.849074  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:50.981931  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:50.989634  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:50.995853  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:51.334696  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:51.348991  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:51.491426  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:51.492330  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:51.833618  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:51.844626  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:51.984195  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:51.985302  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:52.334919  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:52.344890  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:52.483430  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:52.484577  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:52.833804  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:52.845966  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:52.980535  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:52.981657  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:53.333493  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:53.345580  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:53.481301  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:53.482899  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:53.483553  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:53.833110  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:53.845938  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:53.996740  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:53.998174  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:54.334265  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:54.345544  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:54.488077  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:54.489088  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:54.833856  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:54.846893  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:54.982313  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:54.984449  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:55.333590  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:55.345439  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:55.481901  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:55.483756  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:55.484959  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:55.833795  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:55.846912  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:55.985194  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:55.986869  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:56.332981  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:56.345961  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:56.484347  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:56.485464  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:56.834149  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:56.849037  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:56.982925  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:56.986831  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:57.333287  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:57.344956  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:57.481955  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:57.492325  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:57.493676  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:57.833426  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:57.844766  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:57.982873  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:57.984241  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:58.334364  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:58.346131  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:58.492147  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:58.492947  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:58.834054  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:58.853019  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:58.991069  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:58.992535  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:59.333737  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:59.346124  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:59.495213  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:59.495807  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:59.496471  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:59.833938  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:59.845169  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:59.983223  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:59.984276  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:00.333940  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:00.345113  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:00.481959  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:00.482968  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:00.834016  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:00.845460  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:00.984100  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:00.985224  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:01.332734  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:01.344581  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:01.486492  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:01.487076  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:01.833007  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:01.844703  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:01.981792  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:01.982901  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:01.983764  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:02.334761  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:02.345539  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:02.487447  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:02.491829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:02.834434  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:02.845976  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:02.984185  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:02.987921  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:03.334009  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:03.345775  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:03.481785  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:03.481999  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:03.834559  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:03.846410  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:03.982047  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:03.983140  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:03.986356  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:04.334016  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:04.345507  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:04.482381  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:04.483345  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:04.833296  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:04.845032  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:04.983083  559927 kapi.go:107] duration metric: took 1m25.506656031s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:36:04.983755  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:05.334049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:05.345555  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:05.480336  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:05.833772  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:05.845009  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:05.982793  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:06.334193  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:06.346939  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:06.482860  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:06.484360  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:06.833274  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:06.844879  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:06.982428  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:07.332952  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:07.347731  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:07.482480  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:07.833289  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:07.844648  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:07.980267  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:08.333858  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:08.345865  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:08.481076  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:08.483138  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:08.835184  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:08.845444  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:08.987050  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:09.334706  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:09.348925  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:09.482708  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:09.834286  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:09.845038  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:09.986190  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:10.333090  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:10.344775  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:10.480737  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:10.833646  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:10.846522  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:10.982188  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:10.982779  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:11.333798  559927 kapi.go:107] duration metric: took 1m27.004325034s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 00:36:11.335762  559927 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-220192 cluster.
	I0927 00:36:11.337808  559927 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:36:11.339463  559927 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
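(The gcp-auth messages above mention opting a pod out of credential mounting via the `gcp-auth-skip-secret` label key. A minimal sketch of what that could look like, run by hand against the same profile; the pod name is illustrative and the label value of "true" is an assumption, since the log only names the key:)

    # Hypothetical pod that carries the gcp-auth-skip-secret label so the
    # gcp-auth webhook (per the message above) would not mount credentials.
    kubectl --context addons-220192 run skip-auth-demo \
      --image=gcr.io/k8s-minikube/busybox \
      --labels=gcp-auth-skip-secret=true \
      --restart=Never -- sleep 3600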
	I0927 00:36:11.344308  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:11.480962  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:11.846998  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:11.989166  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:12.345349  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:12.483345  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:12.845611  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:12.985783  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:12.987818  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:13.345705  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:13.483215  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:13.844991  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:13.984190  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:14.345761  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:14.483266  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:14.848904  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:14.983719  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:15.344480  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:15.486603  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:15.492777  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:15.846650  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:15.979870  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:16.345708  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:16.480932  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:16.845136  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:16.982088  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:17.345624  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:17.482482  559927 kapi.go:107] duration metric: took 1m38.006940645s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 00:36:17.844816  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:17.984704  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:18.345226  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:18.845178  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:19.349482  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:19.846085  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:20.349935  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:20.481081  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:20.845700  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:21.345969  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:21.844863  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:22.345753  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:22.845200  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:22.981147  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:23.346423  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:23.845463  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:24.345795  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:24.845049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:25.345257  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:25.484602  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:25.846829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:26.347013  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:26.845138  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:27.345506  559927 kapi.go:107] duration metric: took 1m47.50539711s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 00:36:27.348020  559927 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0927 00:36:27.351337  559927 addons.go:510] duration metric: took 1m53.969914524s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
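(The summary above lists the addons enabled for this profile during the automated run. A hedged sketch of inspecting or toggling the same addons manually with the minikube CLI, assuming the same test binary and profile name; the specific addons shown are taken from the enabled list above:)

    # List addon status, then toggle two of the addons named in the summary.
    out/minikube-linux-arm64 -p addons-220192 addons list
    out/minikube-linux-arm64 -p addons-220192 addons enable csi-hostpath-driver
    out/minikube-linux-arm64 -p addons-220192 addons disable inspektor-gadget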
	I0927 00:36:27.980368  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:29.982001  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:32.481885  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:34.980951  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:36.981764  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:39.480929  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:39.981626  559927 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"True"
	I0927 00:36:39.981655  559927 pod_ready.go:82] duration metric: took 1m19.007136304s for pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace to be "Ready" ...
	I0927 00:36:39.981668  559927 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dqrvw" in "kube-system" namespace to be "Ready" ...
	I0927 00:36:39.986994  559927 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dqrvw" in "kube-system" namespace has status "Ready":"True"
	I0927 00:36:39.987021  559927 pod_ready.go:82] duration metric: took 5.342068ms for pod "nvidia-device-plugin-daemonset-dqrvw" in "kube-system" namespace to be "Ready" ...
	I0927 00:36:39.987044  559927 pod_ready.go:39] duration metric: took 1m20.990388006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
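(The readiness waits above poll pods by label selector until each reports Ready. A rough manual equivalent using kubectl wait against two of the selectors listed in the summary line; this is a sketch only, the harness uses its own polling loop rather than kubectl wait:)

    # Wait for the kube-dns and kube-proxy pods to report the Ready condition.
    kubectl --context addons-220192 -n kube-system wait --timeout=6m \
      --for=condition=Ready pod -l k8s-app=kube-dns
    kubectl --context addons-220192 -n kube-system wait --timeout=6m \
      --for=condition=Ready pod -l k8s-app=kube-proxy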
	I0927 00:36:39.987060  559927 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:36:39.987091  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:36:39.987152  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:36:40.044709  559927 cri.go:89] found id: "04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:40.044730  559927 cri.go:89] found id: ""
	I0927 00:36:40.044737  559927 logs.go:276] 1 containers: [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395]
	I0927 00:36:40.044793  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.049159  559927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 00:36:40.049232  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:36:40.092137  559927 cri.go:89] found id: "6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:40.092160  559927 cri.go:89] found id: ""
	I0927 00:36:40.092168  559927 logs.go:276] 1 containers: [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e]
	I0927 00:36:40.092226  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.095880  559927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 00:36:40.095952  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:36:40.136619  559927 cri.go:89] found id: "1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:40.136643  559927 cri.go:89] found id: ""
	I0927 00:36:40.136651  559927 logs.go:276] 1 containers: [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6]
	I0927 00:36:40.136728  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.140255  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:36:40.140338  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:36:40.191576  559927 cri.go:89] found id: "555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:40.191596  559927 cri.go:89] found id: ""
	I0927 00:36:40.191603  559927 logs.go:276] 1 containers: [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5]
	I0927 00:36:40.191664  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.195147  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:36:40.195228  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:36:40.232473  559927 cri.go:89] found id: "5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:40.232496  559927 cri.go:89] found id: ""
	I0927 00:36:40.232504  559927 logs.go:276] 1 containers: [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315]
	I0927 00:36:40.232560  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.236094  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:36:40.236166  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:36:40.273140  559927 cri.go:89] found id: "2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:40.273163  559927 cri.go:89] found id: ""
	I0927 00:36:40.273170  559927 logs.go:276] 1 containers: [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9]
	I0927 00:36:40.273258  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.276617  559927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 00:36:40.276695  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:36:40.313852  559927 cri.go:89] found id: "d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:40.313876  559927 cri.go:89] found id: ""
	I0927 00:36:40.313885  559927 logs.go:276] 1 containers: [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5]
	I0927 00:36:40.313941  559927 ssh_runner.go:195] Run: which crictl
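(The crictl calls above locate one container ID per control-plane component by filtering on container name. A hedged equivalent run by hand over SSH into the same node, assuming crictl is available on the node's PATH as it is in this run:)

    # Discover the kube-apiserver and etcd container IDs inside the node.
    out/minikube-linux-arm64 -p addons-220192 ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
    out/minikube-linux-arm64 -p addons-220192 ssh -- sudo crictl ps -a --quiet --name=etcd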
	I0927 00:36:40.317368  559927 logs.go:123] Gathering logs for kubelet ...
	I0927 00:36:40.317391  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:36:40.354686  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.883351    1511 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:40.354935  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.883402    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:40.355126  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.916164    1511 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:40.355357  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:40.356718  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:40.357232  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:40.357591  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:40.358101  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:40.415196  559927 logs.go:123] Gathering logs for kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] ...
	I0927 00:36:40.415235  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:40.520289  559927 logs.go:123] Gathering logs for kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] ...
	I0927 00:36:40.520324  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:40.569490  559927 logs.go:123] Gathering logs for kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] ...
	I0927 00:36:40.569523  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:40.620143  559927 logs.go:123] Gathering logs for kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] ...
	I0927 00:36:40.620183  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:40.663881  559927 logs.go:123] Gathering logs for CRI-O ...
	I0927 00:36:40.663911  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 00:36:40.763619  559927 logs.go:123] Gathering logs for dmesg ...
	I0927 00:36:40.763658  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:36:40.779898  559927 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:36:40.779926  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:36:40.969685  559927 logs.go:123] Gathering logs for kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] ...
	I0927 00:36:40.969715  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:41.024968  559927 logs.go:123] Gathering logs for etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] ...
	I0927 00:36:41.025001  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:41.081642  559927 logs.go:123] Gathering logs for coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] ...
	I0927 00:36:41.081676  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:41.120059  559927 logs.go:123] Gathering logs for container status ...
	I0927 00:36:41.120093  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
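(The log-gathering commands above are all executed over SSH inside the node. Collecting roughly the same diagnostics by hand would look like the sketch below; container IDs for the crictl logs calls would come from the discovery step earlier in this log:)

    # Pull the same kubelet, CRI-O, and container-status diagnostics manually.
    out/minikube-linux-arm64 -p addons-220192 ssh -- sudo journalctl -u kubelet -n 400
    out/minikube-linux-arm64 -p addons-220192 ssh -- sudo journalctl -u crio -n 400
    out/minikube-linux-arm64 -p addons-220192 ssh -- sudo crictl ps -a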
	I0927 00:36:41.178658  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:41.178684  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:36:41.178749  559927 out.go:270] X Problems detected in kubelet:
	W0927 00:36:41.178763  559927 out.go:270]   Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:41.178772  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:41.178787  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:41.178794  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:41.178804  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:41.178810  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:41.178816  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:36:51.180508  559927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:36:51.193914  559927 api_server.go:72] duration metric: took 2m17.812908825s to wait for apiserver process to appear ...
	I0927 00:36:51.193938  559927 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:36:51.193970  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:36:51.194024  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:36:51.258037  559927 cri.go:89] found id: "04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:51.258058  559927 cri.go:89] found id: ""
	I0927 00:36:51.258066  559927 logs.go:276] 1 containers: [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395]
	I0927 00:36:51.258120  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.261573  559927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 00:36:51.261654  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:36:51.300961  559927 cri.go:89] found id: "6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:51.300984  559927 cri.go:89] found id: ""
	I0927 00:36:51.300993  559927 logs.go:276] 1 containers: [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e]
	I0927 00:36:51.301047  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.304390  559927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 00:36:51.304462  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:36:51.344486  559927 cri.go:89] found id: "1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:51.344509  559927 cri.go:89] found id: ""
	I0927 00:36:51.344517  559927 logs.go:276] 1 containers: [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6]
	I0927 00:36:51.344572  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.348065  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:36:51.348139  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:36:51.384964  559927 cri.go:89] found id: "555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:51.384988  559927 cri.go:89] found id: ""
	I0927 00:36:51.384996  559927 logs.go:276] 1 containers: [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5]
	I0927 00:36:51.385080  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.388530  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:36:51.388601  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:36:51.426096  559927 cri.go:89] found id: "5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:51.426119  559927 cri.go:89] found id: ""
	I0927 00:36:51.426127  559927 logs.go:276] 1 containers: [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315]
	I0927 00:36:51.426183  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.429629  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:36:51.429716  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:36:51.466515  559927 cri.go:89] found id: "2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:51.466536  559927 cri.go:89] found id: ""
	I0927 00:36:51.466544  559927 logs.go:276] 1 containers: [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9]
	I0927 00:36:51.466604  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.470090  559927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 00:36:51.470164  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:36:51.509078  559927 cri.go:89] found id: "d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:51.509100  559927 cri.go:89] found id: ""
	I0927 00:36:51.509107  559927 logs.go:276] 1 containers: [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5]
	I0927 00:36:51.509161  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.512599  559927 logs.go:123] Gathering logs for kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] ...
	I0927 00:36:51.512667  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:51.606345  559927 logs.go:123] Gathering logs for kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] ...
	I0927 00:36:51.606381  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:51.648842  559927 logs.go:123] Gathering logs for CRI-O ...
	I0927 00:36:51.648870  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 00:36:51.751992  559927 logs.go:123] Gathering logs for container status ...
	I0927 00:36:51.752031  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:36:51.802535  559927 logs.go:123] Gathering logs for kubelet ...
	I0927 00:36:51.802567  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:36:51.843443  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.883351    1511 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:51.843686  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.883402    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:51.843879  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.916164    1511 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:51.844104  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:51.845476  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:51.845988  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:51.846347  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:51.846856  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:51.904915  559927 logs.go:123] Gathering logs for dmesg ...
	I0927 00:36:51.904950  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:36:51.921815  559927 logs.go:123] Gathering logs for etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] ...
	I0927 00:36:51.921883  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:51.982538  559927 logs.go:123] Gathering logs for coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] ...
	I0927 00:36:51.982627  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:52.028370  559927 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:36:52.028401  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:36:52.168300  559927 logs.go:123] Gathering logs for kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] ...
	I0927 00:36:52.168332  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:52.232001  559927 logs.go:123] Gathering logs for kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] ...
	I0927 00:36:52.232037  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:52.282225  559927 logs.go:123] Gathering logs for kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] ...
	I0927 00:36:52.282254  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:52.325692  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:52.325717  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:36:52.325772  559927 out.go:270] X Problems detected in kubelet:
	W0927 00:36:52.325789  559927 out.go:270]   Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:52.325804  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:52.325811  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:52.325824  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:52.325830  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:52.325836  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:52.325846  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:02.327724  559927 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0927 00:37:02.335228  559927 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0927 00:37:02.336164  559927 api_server.go:141] control plane version: v1.31.1
	I0927 00:37:02.336197  559927 api_server.go:131] duration metric: took 11.142248149s to wait for apiserver health ...
	I0927 00:37:02.336207  559927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:37:02.336227  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:37:02.336293  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:37:02.373662  559927 cri.go:89] found id: "04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:37:02.373688  559927 cri.go:89] found id: ""
	I0927 00:37:02.373696  559927 logs.go:276] 1 containers: [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395]
	I0927 00:37:02.373750  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.377092  559927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 00:37:02.377160  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:37:02.414236  559927 cri.go:89] found id: "6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:37:02.414265  559927 cri.go:89] found id: ""
	I0927 00:37:02.414279  559927 logs.go:276] 1 containers: [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e]
	I0927 00:37:02.414335  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.417663  559927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 00:37:02.417741  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:37:02.468306  559927 cri.go:89] found id: "1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:37:02.468327  559927 cri.go:89] found id: ""
	I0927 00:37:02.468335  559927 logs.go:276] 1 containers: [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6]
	I0927 00:37:02.468389  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.471964  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:37:02.472034  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:37:02.512245  559927 cri.go:89] found id: "555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:37:02.512267  559927 cri.go:89] found id: ""
	I0927 00:37:02.512275  559927 logs.go:276] 1 containers: [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5]
	I0927 00:37:02.512330  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.515876  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:37:02.515968  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:37:02.552023  559927 cri.go:89] found id: "5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:37:02.552047  559927 cri.go:89] found id: ""
	I0927 00:37:02.552055  559927 logs.go:276] 1 containers: [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315]
	I0927 00:37:02.552110  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.555592  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:37:02.555670  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:37:02.601327  559927 cri.go:89] found id: "2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:37:02.601351  559927 cri.go:89] found id: ""
	I0927 00:37:02.601359  559927 logs.go:276] 1 containers: [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9]
	I0927 00:37:02.601447  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.604953  559927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 00:37:02.605044  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:37:02.642635  559927 cri.go:89] found id: "d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:37:02.642660  559927 cri.go:89] found id: ""
	I0927 00:37:02.642668  559927 logs.go:276] 1 containers: [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5]
	I0927 00:37:02.642789  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.646380  559927 logs.go:123] Gathering logs for kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] ...
	I0927 00:37:02.646406  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:37:02.718917  559927 logs.go:123] Gathering logs for kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] ...
	I0927 00:37:02.718956  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:37:02.761541  559927 logs.go:123] Gathering logs for container status ...
	I0927 00:37:02.761572  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:37:02.809548  559927 logs.go:123] Gathering logs for kubelet ...
	I0927 00:37:02.809580  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:37:02.853630  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.883351    1511 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:37:02.853910  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.883402    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:37:02.854104  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.916164    1511 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:37:02.854331  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:37:02.855706  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:02.856214  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:02.856573  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:02.857089  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:37:02.916418  559927 logs.go:123] Gathering logs for dmesg ...
	I0927 00:37:02.916455  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:37:02.932480  559927 logs.go:123] Gathering logs for kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] ...
	I0927 00:37:02.932508  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:37:03.002890  559927 logs.go:123] Gathering logs for kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] ...
	I0927 00:37:03.002926  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:37:03.049813  559927 logs.go:123] Gathering logs for kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] ...
	I0927 00:37:03.049846  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:37:03.093274  559927 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:37:03.093302  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:37:03.235228  559927 logs.go:123] Gathering logs for etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] ...
	I0927 00:37:03.235262  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:37:03.286098  559927 logs.go:123] Gathering logs for coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] ...
	I0927 00:37:03.286134  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:37:03.330375  559927 logs.go:123] Gathering logs for CRI-O ...
	I0927 00:37:03.330463  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 00:37:03.436949  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:37:03.436986  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:37:03.437055  559927 out.go:270] X Problems detected in kubelet:
	W0927 00:37:03.437072  559927 out.go:270]   Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:37:03.437086  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:03.437094  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:03.437105  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:03.437111  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:37:03.437117  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:37:03.437124  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:13.449086  559927 system_pods.go:59] 18 kube-system pods found
	I0927 00:37:13.449126  559927 system_pods.go:61] "coredns-7c65d6cfc9-wnhpd" [4f3b2231-030c-4af9-beae-7c98c13d01cd] Running
	I0927 00:37:13.449134  559927 system_pods.go:61] "csi-hostpath-attacher-0" [c49fd5b5-341f-441f-981c-70e3f7bccbff] Running
	I0927 00:37:13.449139  559927 system_pods.go:61] "csi-hostpath-resizer-0" [21888ecf-1320-496d-97d5-a0c1e85ce981] Running
	I0927 00:37:13.449143  559927 system_pods.go:61] "csi-hostpathplugin-pst4l" [ae3ecba5-af16-41fb-a4c3-bf2c43689e50] Running
	I0927 00:37:13.449148  559927 system_pods.go:61] "etcd-addons-220192" [94827fa0-c442-4e24-a83e-22de3bff65e3] Running
	I0927 00:37:13.449152  559927 system_pods.go:61] "kindnet-4rr4t" [afd40f83-7a79-4edc-bbfc-ff6936a3158e] Running
	I0927 00:37:13.449157  559927 system_pods.go:61] "kube-apiserver-addons-220192" [0bec6c78-990c-4ffb-be43-dfb155b147f7] Running
	I0927 00:37:13.449161  559927 system_pods.go:61] "kube-controller-manager-addons-220192" [1353546b-84d9-4cd3-938e-6734b6b3413b] Running
	I0927 00:37:13.449172  559927 system_pods.go:61] "kube-ingress-dns-minikube" [586c242e-8199-4142-985e-e89f7d01e3cc] Running
	I0927 00:37:13.449178  559927 system_pods.go:61] "kube-proxy-shqd9" [476cb0de-772b-4e25-ac8c-7244a6d392e7] Running
	I0927 00:37:13.449186  559927 system_pods.go:61] "kube-scheduler-addons-220192" [c391b3f7-ca7f-48e9-9cec-7188a266035f] Running
	I0927 00:37:13.449190  559927 system_pods.go:61] "metrics-server-84c5f94fbc-zpbj2" [1a96d0d6-2c40-4cd4-ba04-605e67d179f7] Running
	I0927 00:37:13.449195  559927 system_pods.go:61] "nvidia-device-plugin-daemonset-dqrvw" [e6729774-57a9-49c2-a405-b1a541551dd4] Running
	I0927 00:37:13.449199  559927 system_pods.go:61] "registry-66c9cd494c-7997r" [06852bd1-3230-4615-b6a1-8834e426e02d] Running
	I0927 00:37:13.449203  559927 system_pods.go:61] "registry-proxy-ld2gg" [44a3013c-bbfc-4d08-9ed4-a5160422cdf0] Running
	I0927 00:37:13.449210  559927 system_pods.go:61] "snapshot-controller-56fcc65765-b4j5p" [de8a8d5b-ab34-41cb-ac84-b1c9dd58a1ff] Running
	I0927 00:37:13.449215  559927 system_pods.go:61] "snapshot-controller-56fcc65765-w6xf7" [e8e9ea4c-ac11-4dc7-85aa-75c8b2eb463e] Running
	I0927 00:37:13.449221  559927 system_pods.go:61] "storage-provisioner" [20b521d2-cf72-4c64-997c-c30b932659a1] Running
	I0927 00:37:13.449227  559927 system_pods.go:74] duration metric: took 11.113013969s to wait for pod list to return data ...
	I0927 00:37:13.449235  559927 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:37:13.451765  559927 default_sa.go:45] found service account: "default"
	I0927 00:37:13.451791  559927 default_sa.go:55] duration metric: took 2.546967ms for default service account to be created ...
	I0927 00:37:13.451801  559927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:37:13.461994  559927 system_pods.go:86] 18 kube-system pods found
	I0927 00:37:13.462032  559927 system_pods.go:89] "coredns-7c65d6cfc9-wnhpd" [4f3b2231-030c-4af9-beae-7c98c13d01cd] Running
	I0927 00:37:13.462039  559927 system_pods.go:89] "csi-hostpath-attacher-0" [c49fd5b5-341f-441f-981c-70e3f7bccbff] Running
	I0927 00:37:13.462045  559927 system_pods.go:89] "csi-hostpath-resizer-0" [21888ecf-1320-496d-97d5-a0c1e85ce981] Running
	I0927 00:37:13.462050  559927 system_pods.go:89] "csi-hostpathplugin-pst4l" [ae3ecba5-af16-41fb-a4c3-bf2c43689e50] Running
	I0927 00:37:13.462054  559927 system_pods.go:89] "etcd-addons-220192" [94827fa0-c442-4e24-a83e-22de3bff65e3] Running
	I0927 00:37:13.462059  559927 system_pods.go:89] "kindnet-4rr4t" [afd40f83-7a79-4edc-bbfc-ff6936a3158e] Running
	I0927 00:37:13.462063  559927 system_pods.go:89] "kube-apiserver-addons-220192" [0bec6c78-990c-4ffb-be43-dfb155b147f7] Running
	I0927 00:37:13.462091  559927 system_pods.go:89] "kube-controller-manager-addons-220192" [1353546b-84d9-4cd3-938e-6734b6b3413b] Running
	I0927 00:37:13.462098  559927 system_pods.go:89] "kube-ingress-dns-minikube" [586c242e-8199-4142-985e-e89f7d01e3cc] Running
	I0927 00:37:13.462112  559927 system_pods.go:89] "kube-proxy-shqd9" [476cb0de-772b-4e25-ac8c-7244a6d392e7] Running
	I0927 00:37:13.462117  559927 system_pods.go:89] "kube-scheduler-addons-220192" [c391b3f7-ca7f-48e9-9cec-7188a266035f] Running
	I0927 00:37:13.462121  559927 system_pods.go:89] "metrics-server-84c5f94fbc-zpbj2" [1a96d0d6-2c40-4cd4-ba04-605e67d179f7] Running
	I0927 00:37:13.462131  559927 system_pods.go:89] "nvidia-device-plugin-daemonset-dqrvw" [e6729774-57a9-49c2-a405-b1a541551dd4] Running
	I0927 00:37:13.462136  559927 system_pods.go:89] "registry-66c9cd494c-7997r" [06852bd1-3230-4615-b6a1-8834e426e02d] Running
	I0927 00:37:13.462142  559927 system_pods.go:89] "registry-proxy-ld2gg" [44a3013c-bbfc-4d08-9ed4-a5160422cdf0] Running
	I0927 00:37:13.462149  559927 system_pods.go:89] "snapshot-controller-56fcc65765-b4j5p" [de8a8d5b-ab34-41cb-ac84-b1c9dd58a1ff] Running
	I0927 00:37:13.462179  559927 system_pods.go:89] "snapshot-controller-56fcc65765-w6xf7" [e8e9ea4c-ac11-4dc7-85aa-75c8b2eb463e] Running
	I0927 00:37:13.462189  559927 system_pods.go:89] "storage-provisioner" [20b521d2-cf72-4c64-997c-c30b932659a1] Running
	I0927 00:37:13.462197  559927 system_pods.go:126] duration metric: took 10.389744ms to wait for k8s-apps to be running ...
	I0927 00:37:13.462204  559927 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:37:13.462274  559927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:37:13.475870  559927 system_svc.go:56] duration metric: took 13.657024ms WaitForService to wait for kubelet
	I0927 00:37:13.475900  559927 kubeadm.go:582] duration metric: took 2m40.094897458s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:37:13.475921  559927 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:37:13.479550  559927 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0927 00:37:13.479579  559927 node_conditions.go:123] node cpu capacity is 2
	I0927 00:37:13.479592  559927 node_conditions.go:105] duration metric: took 3.664619ms to run NodePressure ...
	I0927 00:37:13.479604  559927 start.go:241] waiting for startup goroutines ...
	I0927 00:37:13.479611  559927 start.go:246] waiting for cluster config update ...
	I0927 00:37:13.479628  559927 start.go:255] writing updated cluster config ...
	I0927 00:37:13.479920  559927 ssh_runner.go:195] Run: rm -f paused
	I0927 00:37:13.906550  559927 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:37:13.908395  559927 out.go:177] * Done! kubectl is now configured to use "addons-220192" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.247813153Z" level=info msg="Removed pod sandbox: 34844c135a5a0351c1581f9fb061bb1f320db27f2afbd81ea76a4c6a93e02e78" id=6300ed3d-fadf-4e54-872d-8f2e5ff59a22 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.248265392Z" level=info msg="Stopping pod sandbox: 7a283627f538129baa8d9a3a7f4984d8f4e1a345aa5d1242ecb9a7575f394fa4" id=fa70c7e1-26f1-4192-a5bd-fb91731000a9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.248299427Z" level=info msg="Stopped pod sandbox (already stopped): 7a283627f538129baa8d9a3a7f4984d8f4e1a345aa5d1242ecb9a7575f394fa4" id=fa70c7e1-26f1-4192-a5bd-fb91731000a9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.248752364Z" level=info msg="Removing pod sandbox: 7a283627f538129baa8d9a3a7f4984d8f4e1a345aa5d1242ecb9a7575f394fa4" id=2faf54cb-9ba5-4e30-bd57-6bf9583ece01 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.274082325Z" level=info msg="Removed pod sandbox: 7a283627f538129baa8d9a3a7f4984d8f4e1a345aa5d1242ecb9a7575f394fa4" id=2faf54cb-9ba5-4e30-bd57-6bf9583ece01 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.274463633Z" level=info msg="Stopping pod sandbox: a0705dd1a75334273eaf29bae6226dcbeb734adaf04faca5364b828ff13df63e" id=748c80e6-76e9-4e8a-b45a-d877ffc61b51 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.274493860Z" level=info msg="Stopped pod sandbox (already stopped): a0705dd1a75334273eaf29bae6226dcbeb734adaf04faca5364b828ff13df63e" id=748c80e6-76e9-4e8a-b45a-d877ffc61b51 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.274903984Z" level=info msg="Removing pod sandbox: a0705dd1a75334273eaf29bae6226dcbeb734adaf04faca5364b828ff13df63e" id=7ba6fc91-fdb4-4dfa-a697-9bd9fef9f077 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.289273323Z" level=info msg="Removed pod sandbox: a0705dd1a75334273eaf29bae6226dcbeb734adaf04faca5364b828ff13df63e" id=7ba6fc91-fdb4-4dfa-a697-9bd9fef9f077 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.289747265Z" level=info msg="Stopping pod sandbox: 15543e1ee4f395761cbf878de559d9a1c3cd04085f87c200ad9da0dcbed58051" id=27f1afcd-e73f-4511-b7f9-d5578912b87f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.289784803Z" level=info msg="Stopped pod sandbox (already stopped): 15543e1ee4f395761cbf878de559d9a1c3cd04085f87c200ad9da0dcbed58051" id=27f1afcd-e73f-4511-b7f9-d5578912b87f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.290089296Z" level=info msg="Removing pod sandbox: 15543e1ee4f395761cbf878de559d9a1c3cd04085f87c200ad9da0dcbed58051" id=76538507-1def-42a4-8c9c-0f731e6b3b05 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.299990366Z" level=info msg="Removed pod sandbox: 15543e1ee4f395761cbf878de559d9a1c3cd04085f87c200ad9da0dcbed58051" id=76538507-1def-42a4-8c9c-0f731e6b3b05 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.300435820Z" level=info msg="Stopping pod sandbox: e70a5bff27f509889219f5a4bf2f07fa13b50c26cd7ed33724324f372c684956" id=2fb8667a-64c8-424f-bb82-5b0cde878a00 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.300473907Z" level=info msg="Stopped pod sandbox (already stopped): e70a5bff27f509889219f5a4bf2f07fa13b50c26cd7ed33724324f372c684956" id=2fb8667a-64c8-424f-bb82-5b0cde878a00 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.300862969Z" level=info msg="Removing pod sandbox: e70a5bff27f509889219f5a4bf2f07fa13b50c26cd7ed33724324f372c684956" id=eeaf1fb5-04c7-47b2-8c71-9d7667a7c79b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.311332321Z" level=info msg="Removed pod sandbox: e70a5bff27f509889219f5a4bf2f07fa13b50c26cd7ed33724324f372c684956" id=eeaf1fb5-04c7-47b2-8c71-9d7667a7c79b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.312338780Z" level=info msg="Stopping pod sandbox: f3771c8311bb3d7b949a1a590be4c3d6ce20ed4c96bf0af9a0fba08521f3db8c" id=b9e84577-67ec-4355-8b36-6b8289fb485a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.312458424Z" level=info msg="Stopped pod sandbox (already stopped): f3771c8311bb3d7b949a1a590be4c3d6ce20ed4c96bf0af9a0fba08521f3db8c" id=b9e84577-67ec-4355-8b36-6b8289fb485a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.312857881Z" level=info msg="Removing pod sandbox: f3771c8311bb3d7b949a1a590be4c3d6ce20ed4c96bf0af9a0fba08521f3db8c" id=00744ada-5377-406d-9ce8-c3a0ddc023a3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.324200485Z" level=info msg="Removed pod sandbox: f3771c8311bb3d7b949a1a590be4c3d6ce20ed4c96bf0af9a0fba08521f3db8c" id=00744ada-5377-406d-9ce8-c3a0ddc023a3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.324841775Z" level=info msg="Stopping pod sandbox: eee82845acb74cdfa4be95dbe30846528f9ea14910c6869f5988d228f8e4544c" id=575ef233-c4d2-4ff2-a1b3-a39bc72fa4c7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.325043952Z" level=info msg="Stopped pod sandbox (already stopped): eee82845acb74cdfa4be95dbe30846528f9ea14910c6869f5988d228f8e4544c" id=575ef233-c4d2-4ff2-a1b3-a39bc72fa4c7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.325476106Z" level=info msg="Removing pod sandbox: eee82845acb74cdfa4be95dbe30846528f9ea14910c6869f5988d228f8e4544c" id=834b3b58-62a2-43a6-9dab-7f3a25444c68 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 27 00:46:29 addons-220192 crio[964]: time="2024-09-27 00:46:29.339537063Z" level=info msg="Removed pod sandbox: eee82845acb74cdfa4be95dbe30846528f9ea14910c6869f5988d228f8e4544c" id=834b3b58-62a2-43a6-9dab-7f3a25444c68 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	27c0aa5661dc1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            12 seconds ago      Exited              gadget                     7                   76fc6dfddfdd1       gadget-hr4wl
	f353e2f491f91       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             10 minutes ago      Running             controller                 0                   073f37da810c0       ingress-nginx-controller-bc57996ff-45pzp
	f79bc824b8278       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 10 minutes ago      Running             gcp-auth                   0                   c3d022a3b14c6       gcp-auth-89d5ffd79-6m9rp
	7d205c93f0684       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   a3d67546ff1f7       nvidia-device-plugin-daemonset-dqrvw
	32362458a9252       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             10 minutes ago      Exited              patch                      2                   4e77855fde36c       ingress-nginx-admission-patch-rbwjb
	b41d6538fc0e2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   10 minutes ago      Exited              create                     0                   749067cfde9c6       ingress-nginx-admission-create-cp22f
	fc011aec16aeb       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              10 minutes ago      Running             yakd                       0                   af034f9d51002       yakd-dashboard-67d98fc6b-rxkjm
	bc524d9595882       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             10 minutes ago      Running             local-path-provisioner     0                   ec2cf1c475ba2       local-path-provisioner-86d989889c-7czzf
	4856201f50285       gcr.io/cloud-spanner-emulator/emulator@sha256:6ce1265c73355797b34d2531c7146eed3996346f860517e35d1434182eb5f01d               10 minutes ago      Running             cloud-spanner-emulator     0                   44bcf8e3d7877       cloud-spanner-emulator-5b584cc74-4hjb6
	880e241766c14       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        10 minutes ago      Running             metrics-server             0                   8cbcf8b4931cd       metrics-server-84c5f94fbc-zpbj2
	17b4809fb1c3e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             11 minutes ago      Running             minikube-ingress-dns       0                   b772ae1ebf9cf       kube-ingress-dns-minikube
	75b98e47380ef       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             11 minutes ago      Running             storage-provisioner        0                   794276bcaa01b       storage-provisioner
	1a8d7c13a8719       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             11 minutes ago      Running             coredns                    0                   ef54c3fa3cd28       coredns-7c65d6cfc9-wnhpd
	5e3fe54c99e93       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             11 minutes ago      Running             kube-proxy                 0                   16758e5c05deb       kube-proxy-shqd9
	d7a7261efecf3       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             11 minutes ago      Running             kindnet-cni                0                   39c54e6136da4       kindnet-4rr4t
	04b9c719c715f       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             12 minutes ago      Running             kube-apiserver             0                   e263f38ae3b5e       kube-apiserver-addons-220192
	555dc55ff545e       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             12 minutes ago      Running             kube-scheduler             0                   e432a0cbdf14f       kube-scheduler-addons-220192
	2bfc8d78fdf58       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             12 minutes ago      Running             kube-controller-manager    0                   75ef397915466       kube-controller-manager-addons-220192
	6b36b1e46732b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             12 minutes ago      Running             etcd                       0                   8a08dc7f6d87c       etcd-addons-220192
	
	
	==> coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] <==
	[INFO] 10.244.0.17:32921 - 15145 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009599s
	[INFO] 10.244.0.17:32921 - 19537 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002489894s
	[INFO] 10.244.0.17:32921 - 61082 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002491617s
	[INFO] 10.244.0.17:32921 - 31100 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000128301s
	[INFO] 10.244.0.17:32921 - 35939 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000126651s
	[INFO] 10.244.0.17:41730 - 50927 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109577s
	[INFO] 10.244.0.17:41730 - 51164 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000183225s
	[INFO] 10.244.0.17:33425 - 39515 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088917s
	[INFO] 10.244.0.17:33425 - 39334 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000158479s
	[INFO] 10.244.0.17:42680 - 3435 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000165895s
	[INFO] 10.244.0.17:42680 - 3246 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000204483s
	[INFO] 10.244.0.17:41066 - 45139 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001539254s
	[INFO] 10.244.0.17:41066 - 44967 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001594653s
	[INFO] 10.244.0.17:35895 - 35537 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000064679s
	[INFO] 10.244.0.17:35895 - 35134 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000060282s
	[INFO] 10.244.0.20:38814 - 12571 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166667s
	[INFO] 10.244.0.20:57837 - 31175 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000084199s
	[INFO] 10.244.0.20:59015 - 52667 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144571s
	[INFO] 10.244.0.20:43948 - 22611 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081081s
	[INFO] 10.244.0.20:39471 - 5951 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114837s
	[INFO] 10.244.0.20:53453 - 53244 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079014s
	[INFO] 10.244.0.20:50375 - 42686 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002646412s
	[INFO] 10.244.0.20:38002 - 62070 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002044169s
	[INFO] 10.244.0.20:54992 - 48913 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001395109s
	[INFO] 10.244.0.20:42555 - 4765 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002338735s
	
	
	==> describe nodes <==
	Name:               addons-220192
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-220192
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-220192
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_34_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-220192
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:34:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-220192
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:46:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:46:02 +0000   Fri, 27 Sep 2024 00:34:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:46:02 +0000   Fri, 27 Sep 2024 00:34:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:46:02 +0000   Fri, 27 Sep 2024 00:34:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:46:02 +0000   Fri, 27 Sep 2024 00:35:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-220192
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6db0b236675141869357d8bd6acda62f
	  System UUID:                96d22be3-917a-4ba2-9d29-91009fed055d
	  Boot ID:                    7df4580f-f941-474d-8050-3bbd7f78d321
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-5b584cc74-4hjb6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-hr4wl                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-6m9rp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-45pzp    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-wnhpd                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-addons-220192                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-4rr4t                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-220192                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-220192       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-shqd9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-220192                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-zpbj2             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-dqrvw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-7czzf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-rxkjm              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-220192 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-220192 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-220192 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node addons-220192 event: Registered Node addons-220192 in Controller
	  Normal   NodeReady                11m   kubelet          Node addons-220192 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep26 22:08] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.694148] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Sep27 00:06] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] <==
	{"level":"warn","ts":"2024-09-27T00:34:36.642115Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.300727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4619"}
	{"level":"info","ts":"2024-09-27T00:34:36.655610Z","caller":"traceutil/trace.go:171","msg":"trace[277343402] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:336; }","duration":"117.031913ms","start":"2024-09-27T00:34:36.538562Z","end":"2024-09-27T00:34:36.655594Z","steps":["trace[277343402] 'agreement among raft nodes before linearized reading'  (duration: 68.259623ms)","trace[277343402] 'range keys from in-memory index tree'  (duration: 35.006627ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:34:36.705293Z","caller":"traceutil/trace.go:171","msg":"trace[706582313] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"123.045604ms","start":"2024-09-27T00:34:36.582228Z","end":"2024-09-27T00:34:36.705274Z","steps":["trace[706582313] 'process raft request'  (duration: 120.583492ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.705678Z","caller":"traceutil/trace.go:171","msg":"trace[75754528] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"123.945357ms","start":"2024-09-27T00:34:36.581722Z","end":"2024-09-27T00:34:36.705667Z","steps":["trace[75754528] 'process raft request'  (duration: 83.586816ms)","trace[75754528] 'compare'  (duration: 37.390308ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:34:36.707643Z","caller":"traceutil/trace.go:171","msg":"trace[1978378721] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"119.988454ms","start":"2024-09-27T00:34:36.587640Z","end":"2024-09-27T00:34:36.707629Z","steps":["trace[1978378721] 'process raft request'  (duration: 115.241317ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.707788Z","caller":"traceutil/trace.go:171","msg":"trace[245549885] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"125.39105ms","start":"2024-09-27T00:34:36.582391Z","end":"2024-09-27T00:34:36.707782Z","steps":["trace[245549885] 'process raft request'  (duration: 120.456628ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.708110Z","caller":"traceutil/trace.go:171","msg":"trace[386138567] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"101.159781ms","start":"2024-09-27T00:34:36.606943Z","end":"2024-09-27T00:34:36.708103Z","steps":["trace[386138567] 'process raft request'  (duration: 95.968996ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.708135Z","caller":"traceutil/trace.go:171","msg":"trace[1196894315] linearizableReadLoop","detail":"{readStateIndex:349; appliedIndex:344; }","duration":"118.831173ms","start":"2024-09-27T00:34:36.589299Z","end":"2024-09-27T00:34:36.708130Z","steps":["trace[1196894315] 'read index received'  (duration: 75.87577ms)","trace[1196894315] 'applied index is now lower than readState.Index'  (duration: 42.954746ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T00:34:36.708195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.336367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-27T00:34:36.761229Z","caller":"traceutil/trace.go:171","msg":"trace[1688764860] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:342; }","duration":"210.35389ms","start":"2024-09-27T00:34:36.550840Z","end":"2024-09-27T00:34:36.761194Z","steps":["trace[1688764860] 'agreement among raft nodes before linearized reading'  (duration: 157.305008ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.11481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:34:36.767505Z","caller":"traceutil/trace.go:171","msg":"trace[1525429030] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:342; }","duration":"160.358433ms","start":"2024-09-27T00:34:36.607124Z","end":"2024-09-27T00:34:36.767483Z","steps":["trace[1525429030] 'agreement among raft nodes before linearized reading'  (duration: 101.104668ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.890179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-09-27T00:34:36.767882Z","caller":"traceutil/trace.go:171","msg":"trace[1578121818] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:342; }","duration":"212.443063ms","start":"2024-09-27T00:34:36.555427Z","end":"2024-09-27T00:34:36.767870Z","steps":["trace[1578121818] 'agreement among raft nodes before linearized reading'  (duration: 152.878143ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.946598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:34:36.768392Z","caller":"traceutil/trace.go:171","msg":"trace[605102501] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:342; }","duration":"186.060789ms","start":"2024-09-27T00:34:36.582319Z","end":"2024-09-27T00:34:36.768380Z","steps":["trace[605102501] 'agreement among raft nodes before linearized reading'  (duration: 125.936571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.170953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3684"}
	{"level":"warn","ts":"2024-09-27T00:34:36.718091Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.195588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-27T00:34:36.783455Z","caller":"traceutil/trace.go:171","msg":"trace[448706870] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:342; }","duration":"232.572784ms","start":"2024-09-27T00:34:36.550868Z","end":"2024-09-27T00:34:36.783441Z","steps":["trace[448706870] 'agreement among raft nodes before linearized reading'  (duration: 167.157426ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.784208Z","caller":"traceutil/trace.go:171","msg":"trace[557653940] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:342; }","duration":"202.077926ms","start":"2024-09-27T00:34:36.582121Z","end":"2024-09-27T00:34:36.784199Z","steps":["trace[557653940] 'agreement among raft nodes before linearized reading'  (duration: 126.155561ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:37.377526Z","caller":"traceutil/trace.go:171","msg":"trace[877986833] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"141.52327ms","start":"2024-09-27T00:34:37.235983Z","end":"2024-09-27T00:34:37.377506Z","steps":["trace[877986833] 'process raft request'  (duration: 50.41116ms)","trace[877986833] 'compare'  (duration: 90.99976ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:34:37.378234Z","caller":"traceutil/trace.go:171","msg":"trace[1370352228] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"141.929496ms","start":"2024-09-27T00:34:37.236293Z","end":"2024-09-27T00:34:37.378223Z","steps":["trace[1370352228] 'process raft request'  (duration: 141.866039ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:44:24.160820Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1524}
	{"level":"info","ts":"2024-09-27T00:44:24.194054Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1524,"took":"32.739963ms","hash":154592831,"current-db-size-bytes":6713344,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3227648,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-09-27T00:44:24.194100Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":154592831,"revision":1524,"compact-revision":-1}
	
	
	==> gcp-auth [f79bc824b8278bffc4be0ad3ad49df8f62945f0be7f07c2e7eba40dd9ed2637d] <==
	2024/09/27 00:36:10 GCP Auth Webhook started!
	2024/09/27 00:37:14 Ready to marshal response ...
	2024/09/27 00:37:14 Ready to write response ...
	2024/09/27 00:37:14 Ready to marshal response ...
	2024/09/27 00:37:14 Ready to write response ...
	2024/09/27 00:37:14 Ready to marshal response ...
	2024/09/27 00:37:14 Ready to write response ...
	2024/09/27 00:45:18 Ready to marshal response ...
	2024/09/27 00:45:18 Ready to write response ...
	2024/09/27 00:45:18 Ready to marshal response ...
	2024/09/27 00:45:18 Ready to write response ...
	2024/09/27 00:45:18 Ready to marshal response ...
	2024/09/27 00:45:18 Ready to write response ...
	2024/09/27 00:45:27 Ready to marshal response ...
	2024/09/27 00:45:27 Ready to write response ...
	2024/09/27 00:45:53 Ready to marshal response ...
	2024/09/27 00:45:53 Ready to write response ...
	2024/09/27 00:46:09 Ready to marshal response ...
	2024/09/27 00:46:09 Ready to write response ...
	
	
	==> kernel <==
	 00:46:30 up  4:28,  0 users,  load average: 0.84, 0.59, 1.27
	Linux addons-220192 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] <==
	I0927 00:44:28.619887       1 main.go:299] handling current node
	I0927 00:44:38.619771       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:44:38.619812       1 main.go:299] handling current node
	I0927 00:44:48.619416       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:44:48.619453       1 main.go:299] handling current node
	I0927 00:44:58.619888       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:44:58.619921       1 main.go:299] handling current node
	I0927 00:45:08.619883       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:45:08.619917       1 main.go:299] handling current node
	I0927 00:45:18.620011       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:45:18.620045       1 main.go:299] handling current node
	I0927 00:45:28.619833       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:45:28.619937       1 main.go:299] handling current node
	I0927 00:45:38.619720       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:45:38.619754       1 main.go:299] handling current node
	I0927 00:45:48.620000       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:45:48.620034       1 main.go:299] handling current node
	I0927 00:45:58.619541       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:45:58.619572       1 main.go:299] handling current node
	I0927 00:46:08.619882       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:46:08.620000       1 main.go:299] handling current node
	I0927 00:46:18.619882       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:46:18.619992       1 main.go:299] handling current node
	I0927 00:46:28.620057       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:46:28.620124       1 main.go:299] handling current node
	
	
	==> kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] <==
	I0927 00:35:39.704587       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 00:35:39.704646       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0927 00:36:39.624166       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.158.28:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.158.28:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.158.28:443: connect: connection refused" logger="UnhandledError"
	W0927 00:36:39.624997       1 handler_proxy.go:99] no RequestInfo found in the context
	E0927 00:36:39.625074       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0927 00:36:39.626030       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.158.28:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.158.28:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.158.28:443: connect: connection refused" logger="UnhandledError"
	I0927 00:36:39.717054       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0927 00:45:18.452440       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.96.241"}
	I0927 00:46:03.777180       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0927 00:46:25.606817       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.606863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.636258       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.636318       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.715476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.715518       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.735524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.735605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.743258       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.743298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0927 00:46:26.719243       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0927 00:46:26.744004       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0927 00:46:26.865842       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] <==
	I0927 00:45:22.407214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="63.605µs"
	I0927 00:45:22.434050       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="12.086151ms"
	I0927 00:45:22.434128       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="35.339µs"
	I0927 00:45:29.059797       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="3.766µs"
	I0927 00:45:32.235961       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-220192"
	I0927 00:45:39.181705       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0927 00:46:02.867342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-220192"
	I0927 00:46:19.032635       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0927 00:46:19.190399       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0927 00:46:19.606211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-220192"
	I0927 00:46:25.785559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="4.653µs"
	E0927 00:46:26.722442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0927 00:46:26.745840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0927 00:46:26.867493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:46:27.528025       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:46:27.528068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:46:28.106923       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:46:28.106966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:46:28.184229       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:46:28.184271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:46:28.492953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.415µs"
	W0927 00:46:30.072968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:46:30.073107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:46:30.387240       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:46:30.387280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] <==
	I0927 00:34:38.907788       1 server_linux.go:66] "Using iptables proxy"
	I0927 00:34:39.331001       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0927 00:34:39.331159       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:34:39.614187       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0927 00:34:39.614314       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:34:39.617555       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:34:39.625699       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:34:39.625787       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:34:39.645465       1 config.go:199] "Starting service config controller"
	I0927 00:34:39.650076       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:34:39.645886       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:34:39.650198       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:34:39.648423       1 config.go:328] "Starting node config controller"
	I0927 00:34:39.650407       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:34:39.750364       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:34:39.751607       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:34:39.751679       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] <==
	W0927 00:34:26.563980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:34:26.564047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:34:26.564555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:34:26.564682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564768       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:34:26.564871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:34:26.564995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:34:26.565087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:34:26.565193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.425835       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:34:27.425963       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:34:27.457379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:34:27.457505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.578493       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 00:34:27.578645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.626921       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:34:27.627048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.640709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:34:27.640830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0927 00:34:29.347645       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:46:26 addons-220192 kubelet[1511]: E0927 00:46:26.704594    1511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c6e062ee563f9135c23d8b3584ea793af69e997ba8c24adcf024cf4c3a8e589\": container with ID starting with 9c6e062ee563f9135c23d8b3584ea793af69e997ba8c24adcf024cf4c3a8e589 not found: ID does not exist" containerID="9c6e062ee563f9135c23d8b3584ea793af69e997ba8c24adcf024cf4c3a8e589"
	Sep 27 00:46:26 addons-220192 kubelet[1511]: I0927 00:46:26.704633    1511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c6e062ee563f9135c23d8b3584ea793af69e997ba8c24adcf024cf4c3a8e589"} err="failed to get container status \"9c6e062ee563f9135c23d8b3584ea793af69e997ba8c24adcf024cf4c3a8e589\": rpc error: code = NotFound desc = could not find container \"9c6e062ee563f9135c23d8b3584ea793af69e997ba8c24adcf024cf4c3a8e589\": container with ID starting with 9c6e062ee563f9135c23d8b3584ea793af69e997ba8c24adcf024cf4c3a8e589 not found: ID does not exist"
	Sep 27 00:46:26 addons-220192 kubelet[1511]: I0927 00:46:26.704658    1511 scope.go:117] "RemoveContainer" containerID="2b63efaf7222add2a4062dd01e9ca4242b7345a6a2d7076a6a7e925832e09675"
	Sep 27 00:46:26 addons-220192 kubelet[1511]: I0927 00:46:26.725013    1511 scope.go:117] "RemoveContainer" containerID="2b63efaf7222add2a4062dd01e9ca4242b7345a6a2d7076a6a7e925832e09675"
	Sep 27 00:46:26 addons-220192 kubelet[1511]: E0927 00:46:26.725576    1511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b63efaf7222add2a4062dd01e9ca4242b7345a6a2d7076a6a7e925832e09675\": container with ID starting with 2b63efaf7222add2a4062dd01e9ca4242b7345a6a2d7076a6a7e925832e09675 not found: ID does not exist" containerID="2b63efaf7222add2a4062dd01e9ca4242b7345a6a2d7076a6a7e925832e09675"
	Sep 27 00:46:26 addons-220192 kubelet[1511]: I0927 00:46:26.725607    1511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b63efaf7222add2a4062dd01e9ca4242b7345a6a2d7076a6a7e925832e09675"} err="failed to get container status \"2b63efaf7222add2a4062dd01e9ca4242b7345a6a2d7076a6a7e925832e09675\": rpc error: code = NotFound desc = could not find container \"2b63efaf7222add2a4062dd01e9ca4242b7345a6a2d7076a6a7e925832e09675\": container with ID starting with 2b63efaf7222add2a4062dd01e9ca4242b7345a6a2d7076a6a7e925832e09675 not found: ID does not exist"
	Sep 27 00:46:27 addons-220192 kubelet[1511]: I0927 00:46:27.075118    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de8a8d5b-ab34-41cb-ac84-b1c9dd58a1ff" path="/var/lib/kubelet/pods/de8a8d5b-ab34-41cb-ac84-b1c9dd58a1ff/volumes"
	Sep 27 00:46:27 addons-220192 kubelet[1511]: I0927 00:46:27.075507    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8e9ea4c-ac11-4dc7-85aa-75c8b2eb463e" path="/var/lib/kubelet/pods/e8e9ea4c-ac11-4dc7-85aa-75c8b2eb463e/volumes"
	Sep 27 00:46:27 addons-220192 kubelet[1511]: I0927 00:46:27.933684    1511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtt2k\" (UniqueName: \"kubernetes.io/projected/69485328-b5c9-4fa1-9385-f065e8dc91b6-kube-api-access-xtt2k\") pod \"69485328-b5c9-4fa1-9385-f065e8dc91b6\" (UID: \"69485328-b5c9-4fa1-9385-f065e8dc91b6\") "
	Sep 27 00:46:27 addons-220192 kubelet[1511]: I0927 00:46:27.933735    1511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/69485328-b5c9-4fa1-9385-f065e8dc91b6-gcp-creds\") pod \"69485328-b5c9-4fa1-9385-f065e8dc91b6\" (UID: \"69485328-b5c9-4fa1-9385-f065e8dc91b6\") "
	Sep 27 00:46:27 addons-220192 kubelet[1511]: I0927 00:46:27.933871    1511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69485328-b5c9-4fa1-9385-f065e8dc91b6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "69485328-b5c9-4fa1-9385-f065e8dc91b6" (UID: "69485328-b5c9-4fa1-9385-f065e8dc91b6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 00:46:27 addons-220192 kubelet[1511]: I0927 00:46:27.939080    1511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69485328-b5c9-4fa1-9385-f065e8dc91b6-kube-api-access-xtt2k" (OuterVolumeSpecName: "kube-api-access-xtt2k") pod "69485328-b5c9-4fa1-9385-f065e8dc91b6" (UID: "69485328-b5c9-4fa1-9385-f065e8dc91b6"). InnerVolumeSpecName "kube-api-access-xtt2k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:46:28 addons-220192 kubelet[1511]: I0927 00:46:28.034637    1511 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xtt2k\" (UniqueName: \"kubernetes.io/projected/69485328-b5c9-4fa1-9385-f065e8dc91b6-kube-api-access-xtt2k\") on node \"addons-220192\" DevicePath \"\""
	Sep 27 00:46:28 addons-220192 kubelet[1511]: I0927 00:46:28.034682    1511 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/69485328-b5c9-4fa1-9385-f065e8dc91b6-gcp-creds\") on node \"addons-220192\" DevicePath \"\""
	Sep 27 00:46:28 addons-220192 kubelet[1511]: I0927 00:46:28.693084    1511 scope.go:117] "RemoveContainer" containerID="499cb4706f976569caf985acd513e70f28e3bcce53e3ca9ba02b2bf20ed93b37"
	Sep 27 00:46:28 addons-220192 kubelet[1511]: I0927 00:46:28.740297    1511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z4fl\" (UniqueName: \"kubernetes.io/projected/06852bd1-3230-4615-b6a1-8834e426e02d-kube-api-access-2z4fl\") pod \"06852bd1-3230-4615-b6a1-8834e426e02d\" (UID: \"06852bd1-3230-4615-b6a1-8834e426e02d\") "
	Sep 27 00:46:28 addons-220192 kubelet[1511]: I0927 00:46:28.757005    1511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06852bd1-3230-4615-b6a1-8834e426e02d-kube-api-access-2z4fl" (OuterVolumeSpecName: "kube-api-access-2z4fl") pod "06852bd1-3230-4615-b6a1-8834e426e02d" (UID: "06852bd1-3230-4615-b6a1-8834e426e02d"). InnerVolumeSpecName "kube-api-access-2z4fl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:46:28 addons-220192 kubelet[1511]: I0927 00:46:28.840921    1511 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2z4fl\" (UniqueName: \"kubernetes.io/projected/06852bd1-3230-4615-b6a1-8834e426e02d-kube-api-access-2z4fl\") on node \"addons-220192\" DevicePath \"\""
	Sep 27 00:46:28 addons-220192 kubelet[1511]: I0927 00:46:28.941591    1511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hp9q\" (UniqueName: \"kubernetes.io/projected/44a3013c-bbfc-4d08-9ed4-a5160422cdf0-kube-api-access-4hp9q\") pod \"44a3013c-bbfc-4d08-9ed4-a5160422cdf0\" (UID: \"44a3013c-bbfc-4d08-9ed4-a5160422cdf0\") "
	Sep 27 00:46:28 addons-220192 kubelet[1511]: I0927 00:46:28.944200    1511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a3013c-bbfc-4d08-9ed4-a5160422cdf0-kube-api-access-4hp9q" (OuterVolumeSpecName: "kube-api-access-4hp9q") pod "44a3013c-bbfc-4d08-9ed4-a5160422cdf0" (UID: "44a3013c-bbfc-4d08-9ed4-a5160422cdf0"). InnerVolumeSpecName "kube-api-access-4hp9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:46:29 addons-220192 kubelet[1511]: I0927 00:46:29.042426    1511 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4hp9q\" (UniqueName: \"kubernetes.io/projected/44a3013c-bbfc-4d08-9ed4-a5160422cdf0-kube-api-access-4hp9q\") on node \"addons-220192\" DevicePath \"\""
	Sep 27 00:46:29 addons-220192 kubelet[1511]: I0927 00:46:29.074403    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69485328-b5c9-4fa1-9385-f065e8dc91b6" path="/var/lib/kubelet/pods/69485328-b5c9-4fa1-9385-f065e8dc91b6/volumes"
	Sep 27 00:46:29 addons-220192 kubelet[1511]: I0927 00:46:29.146532    1511 scope.go:117] "RemoveContainer" containerID="0d8bf8406410fb567ff64378db451088c62c610dad2ccfe4a6d6a0162a922476"
	Sep 27 00:46:29 addons-220192 kubelet[1511]: E0927 00:46:29.347000    1511 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397989346662255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:507654,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:46:29 addons-220192 kubelet[1511]: E0927 00:46:29.347035    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727397989346662255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:507654,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [75b98e47380efba40cfb3e8a5003cf4e028dcd407cc6a050e8ed0e60a3c3168e] <==
	I0927 00:35:20.141906       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:35:20.155589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:35:20.158853       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:35:20.168600       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:35:20.168906       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-220192_3340d466-8fff-465f-820a-19104d1219e9!
	I0927 00:35:20.169972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88317798-314a-4def-996f-d4666fa1d4d1", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-220192_3340d466-8fff-465f-820a-19104d1219e9 became leader
	I0927 00:35:20.269123       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-220192_3340d466-8fff-465f-820a-19104d1219e9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-220192 -n addons-220192
helpers_test.go:261: (dbg) Run:  kubectl --context addons-220192 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-cp22f ingress-nginx-admission-patch-rbwjb
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-220192 describe pod busybox ingress-nginx-admission-create-cp22f ingress-nginx-admission-patch-rbwjb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-220192 describe pod busybox ingress-nginx-admission-create-cp22f ingress-nginx-admission-patch-rbwjb: exit status 1 (95.60556ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-220192/192.168.49.2
	Start Time:       Fri, 27 Sep 2024 00:37:14 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lzqg5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lzqg5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m17s                  default-scheduler  Successfully assigned default/busybox to addons-220192
	  Normal   Pulling    7m56s (x4 over 9m17s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m56s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m56s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m16s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m7s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-cp22f" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rbwjb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-220192 describe pod busybox ingress-nginx-admission-create-cp22f ingress-nginx-admission-patch-rbwjb: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.78s)
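For reference, the registry reachability that this test exercises can be re-checked by hand against the same cluster. A minimal sketch, assuming only the "registry" service name implied by the DNS lookups in the coredns log above; the probe pod name "registry-probe" is illustrative, everything else is standard kubectl usage:

	# confirm the registry Service and its Endpoints exist in kube-system
	kubectl --context addons-220192 -n kube-system get svc registry
	kubectl --context addons-220192 -n kube-system get endpoints registry
	# repeat the in-cluster HTTP probe the test performs (same image and wget invocation)
	kubectl --context addons-220192 run registry-probe --rm -i --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  wget --spider -S http://registry.kube-system.svc.cluster.local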

                                                
                                    
TestAddons/parallel/Ingress (152.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-220192 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-220192 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-220192 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f647afe5-a1cb-42de-9bb5-86cc1b983514] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f647afe5-a1cb-42de-9bb5-86cc1b983514] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003486975s
I0927 00:46:50.782644  559158 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-220192 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.982770752s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-220192 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-220192 addons disable ingress-dns --alsologtostderr -v=1: (1.549266694s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-220192 addons disable ingress --alsologtostderr -v=1: (7.737475171s)
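The ssh curl above exits with status 28, curl's "operation timed out" code, so the probe reached the node but never got a response from the ingress controller within the 2m10s window. A hedged way to re-run the check by hand against this profile (illustrative commands, not part of the test):

	# Confirm the controller and the backend nginx pod are actually serving
	kubectl --context addons-220192 -n ingress-nginx get pods,svc
	kubectl --context addons-220192 get ingress,pods -o wide
	# Repeat the probe verbosely with a short timeout
	out/minikube-linux-arm64 -p addons-220192 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"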
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-220192
helpers_test.go:235: (dbg) docker inspect addons-220192:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b",
	        "Created": "2024-09-27T00:34:02.077711994Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 560408,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-27T00:34:02.205411751Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/hosts",
	        "LogPath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b-json.log",
	        "Name": "/addons-220192",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-220192:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-220192",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a-init/diff:/var/lib/docker/overlay2/e55adca0cb8a4469e5ee8e2f29139ff0ae0fed3b714ff629d2562144f224236f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-220192",
	                "Source": "/var/lib/docker/volumes/addons-220192/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-220192",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-220192",
	                "name.minikube.sigs.k8s.io": "addons-220192",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb69f56da587fa8de40f3ac5f3f88f4566733f9673b58beb1d3e2d5b04e449e4",
	            "SandboxKey": "/var/run/docker/netns/eb69f56da587",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-220192": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "17b152e28b32de3f994213bf60b3fa21cfee26682153643fc3b71f12f405c393",
	                    "EndpointID": "8d6fe335b06a81d7595798770e72c7f67d0e3bb540d515a162969aad9ac12807",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-220192",
	                        "d422e214370b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
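The inspect dump above is easiest to consume one field at a time; the harness itself does this with Go templates later in the minikube log (the "22/tcp" HostPort lookups). Two hedged one-liners against the same container, assuming it is still running:

	# Host port mapped to the container's SSH endpoint (33501 in this run)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-220192
	# IP assigned to the container on the addons-220192 network (192.168.49.2 here)
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-220192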
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-220192 -n addons-220192
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-220192 logs -n 25: (1.423925604s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| delete  | -p download-only-005398              | download-only-005398   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| start   | -o=json --download-only              | download-only-763965   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | -p download-only-763965              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| delete  | -p download-only-763965              | download-only-763965   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| delete  | -p download-only-005398              | download-only-005398   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| delete  | -p download-only-763965              | download-only-763965   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| start   | --download-only -p                   | download-docker-575684 | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | download-docker-575684               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-575684            | download-docker-575684 | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| start   | --download-only -p                   | binary-mirror-878606   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | binary-mirror-878606                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39419               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-878606              | binary-mirror-878606   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| addons  | disable dashboard -p                 | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | addons-220192                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | addons-220192                        |                        |         |         |                     |                     |
	| start   | -p addons-220192 --wait=true         | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:37 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:45 UTC | 27 Sep 24 00:45 UTC |
	|         | -p addons-220192                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-220192 addons disable         | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:45 UTC | 27 Sep 24 00:45 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-220192 addons                 | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-220192 addons                 | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-220192 ip                     | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	| addons  | addons-220192 addons disable         | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | addons-220192                        |                        |         |         |                     |                     |
	| ssh     | addons-220192 ssh curl -s            | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-220192 ip                     | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	| addons  | addons-220192 addons disable         | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-220192 addons disable         | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:33:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:33:38.065367  559927 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:33:38.065662  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:38.065684  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:33:38.065691  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:38.066134  559927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 00:33:38.067015  559927 out.go:352] Setting JSON to false
	I0927 00:33:38.067932  559927 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15361,"bootTime":1727381857,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0927 00:33:38.068011  559927 start.go:139] virtualization:  
	I0927 00:33:38.070248  559927 out.go:177] * [addons-220192] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:33:38.071946  559927 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:33:38.071998  559927 notify.go:220] Checking for updates...
	I0927 00:33:38.075858  559927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:33:38.077758  559927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 00:33:38.079450  559927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	I0927 00:33:38.081273  559927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:33:38.082746  559927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:33:38.084258  559927 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:33:38.110806  559927 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:33:38.110932  559927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:38.175583  559927 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:33:38.165974566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:38.175704  559927 docker.go:318] overlay module found
	I0927 00:33:38.178529  559927 out.go:177] * Using the docker driver based on user configuration
	I0927 00:33:38.179548  559927 start.go:297] selected driver: docker
	I0927 00:33:38.179564  559927 start.go:901] validating driver "docker" against <nil>
	I0927 00:33:38.179577  559927 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:33:38.180219  559927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:38.238992  559927 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:33:38.229229626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:38.239202  559927 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:33:38.239427  559927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:33:38.240920  559927 out.go:177] * Using Docker driver with root privileges
	I0927 00:33:38.242287  559927 cni.go:84] Creating CNI manager for ""
	I0927 00:33:38.242357  559927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:33:38.242365  559927 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:33:38.242444  559927 start.go:340] cluster config:
	{Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:33:38.244624  559927 out.go:177] * Starting "addons-220192" primary control-plane node in "addons-220192" cluster
	I0927 00:33:38.245946  559927 cache.go:121] Beginning downloading kic base image for docker with crio
	I0927 00:33:38.247419  559927 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:33:38.248793  559927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:33:38.248850  559927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0927 00:33:38.248878  559927 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:33:38.248883  559927 cache.go:56] Caching tarball of preloaded images
	I0927 00:33:38.248983  559927 preload.go:172] Found /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0927 00:33:38.248995  559927 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:33:38.249334  559927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/config.json ...
	I0927 00:33:38.249364  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/config.json: {Name:mkb4ce982f7db05f161e177b73decd3cb5d108a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:33:38.262886  559927 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:33:38.263010  559927 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:33:38.263042  559927 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 00:33:38.263053  559927 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 00:33:38.263061  559927 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 00:33:38.263070  559927 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0927 00:33:55.153743  559927 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0927 00:33:55.153786  559927 cache.go:194] Successfully downloaded all kic artifacts
	I0927 00:33:55.153817  559927 start.go:360] acquireMachinesLock for addons-220192: {Name:mk630666e0be44a920ddd2e3008b4121da78b597 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:33:55.153958  559927 start.go:364] duration metric: took 117.166µs to acquireMachinesLock for "addons-220192"
	I0927 00:33:55.153999  559927 start.go:93] Provisioning new machine with config: &{Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:33:55.154087  559927 start.go:125] createHost starting for "" (driver="docker")
	I0927 00:33:55.156404  559927 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0927 00:33:55.156691  559927 start.go:159] libmachine.API.Create for "addons-220192" (driver="docker")
	I0927 00:33:55.156728  559927 client.go:168] LocalClient.Create starting
	I0927 00:33:55.156866  559927 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem
	I0927 00:33:55.366096  559927 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem
	I0927 00:33:55.869561  559927 cli_runner.go:164] Run: docker network inspect addons-220192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0927 00:33:55.885619  559927 cli_runner.go:211] docker network inspect addons-220192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0927 00:33:55.885722  559927 network_create.go:284] running [docker network inspect addons-220192] to gather additional debugging logs...
	I0927 00:33:55.885746  559927 cli_runner.go:164] Run: docker network inspect addons-220192
	W0927 00:33:55.900334  559927 cli_runner.go:211] docker network inspect addons-220192 returned with exit code 1
	I0927 00:33:55.900373  559927 network_create.go:287] error running [docker network inspect addons-220192]: docker network inspect addons-220192: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-220192 not found
	I0927 00:33:55.900388  559927 network_create.go:289] output of [docker network inspect addons-220192]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-220192 not found
	
	** /stderr **
	I0927 00:33:55.900485  559927 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:33:55.915597  559927 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bf5250}
	I0927 00:33:55.915643  559927 network_create.go:124] attempt to create docker network addons-220192 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0927 00:33:55.915701  559927 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-220192 addons-220192
	I0927 00:33:55.980148  559927 network_create.go:108] docker network addons-220192 192.168.49.0/24 created
	I0927 00:33:55.980183  559927 kic.go:121] calculated static IP "192.168.49.2" for the "addons-220192" container
	I0927 00:33:55.980255  559927 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0927 00:33:55.992949  559927 cli_runner.go:164] Run: docker volume create addons-220192 --label name.minikube.sigs.k8s.io=addons-220192 --label created_by.minikube.sigs.k8s.io=true
	I0927 00:33:56.009754  559927 oci.go:103] Successfully created a docker volume addons-220192
	I0927 00:33:56.009852  559927 cli_runner.go:164] Run: docker run --rm --name addons-220192-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-220192 --entrypoint /usr/bin/test -v addons-220192:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0927 00:33:57.993052  559927 cli_runner.go:217] Completed: docker run --rm --name addons-220192-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-220192 --entrypoint /usr/bin/test -v addons-220192:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (1.983158106s)
	I0927 00:33:57.993080  559927 oci.go:107] Successfully prepared a docker volume addons-220192
	I0927 00:33:57.993109  559927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:33:57.993128  559927 kic.go:194] Starting extracting preloaded images to volume ...
	I0927 00:33:57.993194  559927 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-220192:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0927 00:34:02.014141  559927 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-220192:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (4.020882938s)
	I0927 00:34:02.014176  559927 kic.go:203] duration metric: took 4.021043549s to extract preloaded images to volume ...
	W0927 00:34:02.014327  559927 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0927 00:34:02.014451  559927 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0927 00:34:02.064494  559927 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-220192 --name addons-220192 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-220192 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-220192 --network addons-220192 --ip 192.168.49.2 --volume addons-220192:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0927 00:34:02.388520  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Running}}
	I0927 00:34:02.409325  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:02.431602  559927 cli_runner.go:164] Run: docker exec addons-220192 stat /var/lib/dpkg/alternatives/iptables
	I0927 00:34:02.480602  559927 oci.go:144] the created container "addons-220192" has a running status.
	I0927 00:34:02.480633  559927 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa...
	I0927 00:34:03.617795  559927 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0927 00:34:03.637260  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:03.653027  559927 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0927 00:34:03.653052  559927 kic_runner.go:114] Args: [docker exec --privileged addons-220192 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0927 00:34:03.700155  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:03.717668  559927 machine.go:93] provisionDockerMachine start ...
	I0927 00:34:03.717764  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:03.733546  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:03.733814  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:03.733823  559927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 00:34:03.862293  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-220192
	
	I0927 00:34:03.862317  559927 ubuntu.go:169] provisioning hostname "addons-220192"
	I0927 00:34:03.862386  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:03.879096  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:03.879355  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:03.879374  559927 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-220192 && echo "addons-220192" | sudo tee /etc/hostname
	I0927 00:34:04.019276  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-220192
	
	I0927 00:34:04.019405  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:04.036545  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:04.036798  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:04.036821  559927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-220192' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-220192/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-220192' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:34:04.162591  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:34:04.162681  559927 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19711-553751/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-553751/.minikube}
	I0927 00:34:04.162739  559927 ubuntu.go:177] setting up certificates
	I0927 00:34:04.162769  559927 provision.go:84] configureAuth start
	I0927 00:34:04.162865  559927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-220192
	I0927 00:34:04.179414  559927 provision.go:143] copyHostCerts
	I0927 00:34:04.179501  559927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/ca.pem (1078 bytes)
	I0927 00:34:04.179628  559927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/cert.pem (1123 bytes)
	I0927 00:34:04.179689  559927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/key.pem (1675 bytes)
	I0927 00:34:04.179747  559927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem org=jenkins.addons-220192 san=[127.0.0.1 192.168.49.2 addons-220192 localhost minikube]
	I0927 00:34:04.940382  559927 provision.go:177] copyRemoteCerts
	I0927 00:34:04.940458  559927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:34:04.940508  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:04.963981  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.060102  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:34:05.084207  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:34:05.107968  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:34:05.131460  559927 provision.go:87] duration metric: took 968.661896ms to configureAuth
	I0927 00:34:05.131489  559927 ubuntu.go:193] setting minikube options for container-runtime
	I0927 00:34:05.131682  559927 config.go:182] Loaded profile config "addons-220192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:34:05.131795  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.148107  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:05.148363  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:05.148380  559927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:34:05.367545  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:34:05.367569  559927 machine.go:96] duration metric: took 1.649879839s to provisionDockerMachine
	I0927 00:34:05.367581  559927 client.go:171] duration metric: took 10.210842557s to LocalClient.Create
	I0927 00:34:05.367593  559927 start.go:167] duration metric: took 10.210902338s to libmachine.API.Create "addons-220192"
	I0927 00:34:05.367601  559927 start.go:293] postStartSetup for "addons-220192" (driver="docker")
	I0927 00:34:05.367612  559927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:34:05.367677  559927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:34:05.367727  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.385055  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.479714  559927 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:34:05.483003  559927 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 00:34:05.483039  559927 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 00:34:05.483050  559927 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 00:34:05.483057  559927 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 00:34:05.483067  559927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-553751/.minikube/addons for local assets ...
	I0927 00:34:05.483137  559927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-553751/.minikube/files for local assets ...
	I0927 00:34:05.483165  559927 start.go:296] duration metric: took 115.558426ms for postStartSetup
	I0927 00:34:05.483490  559927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-220192
	I0927 00:34:05.499440  559927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/config.json ...
	I0927 00:34:05.499737  559927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:34:05.499789  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.515159  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.603311  559927 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 00:34:05.607625  559927 start.go:128] duration metric: took 10.453518321s to createHost
	I0927 00:34:05.607654  559927 start.go:83] releasing machines lock for "addons-220192", held for 10.453681394s
	I0927 00:34:05.607730  559927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-220192
	I0927 00:34:05.623821  559927 ssh_runner.go:195] Run: cat /version.json
	I0927 00:34:05.623878  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.623938  559927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:34:05.624015  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.641153  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.648618  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.857953  559927 ssh_runner.go:195] Run: systemctl --version
	I0927 00:34:05.862287  559927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:34:06.008454  559927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 00:34:06.013211  559927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:34:06.035213  559927 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0927 00:34:06.035367  559927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:34:06.065128  559927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0927 00:34:06.065196  559927 start.go:495] detecting cgroup driver to use...
	I0927 00:34:06.065243  559927 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 00:34:06.065323  559927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:34:06.081824  559927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:34:06.093535  559927 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:34:06.093645  559927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:34:06.108200  559927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:34:06.123249  559927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:34:06.207618  559927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:34:06.299470  559927 docker.go:233] disabling docker service ...
	I0927 00:34:06.299551  559927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:34:06.320068  559927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:34:06.331991  559927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:34:06.415970  559927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:34:06.517135  559927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:34:06.528773  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:34:06.545373  559927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:34:06.545478  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.555271  559927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:34:06.555361  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.565035  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.574675  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.584230  559927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:34:06.593099  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.602922  559927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.618358  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.628225  559927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:34:06.636420  559927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:34:06.644684  559927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:34:06.724669  559927 ssh_runner.go:195] Run: sudo systemctl restart crio
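The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, cgroup manager, conmon_cgroup, default_sysctls) before the daemon-reload and CRI-O restart make the changes take effect. A minimal Go sketch of the same line-oriented substitutions, shown on an in-memory stand-in string rather than the real file; the option names and target values come from the log, everything else is assumed:

package main

// Sketch: reproduce the first two sed substitutions from the log with Go's
// regexp package. Operates on a stand-in string; the real commands edit
// /etc/crio/crio.conf.d/02-crio.conf in place over SSH.
import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n" // stand-in contents
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}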
	I0927 00:34:06.839759  559927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:34:06.839877  559927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:34:06.843772  559927 start.go:563] Will wait 60s for crictl version
	I0927 00:34:06.843909  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:34:06.847728  559927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:34:06.886811  559927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0927 00:34:06.886963  559927 ssh_runner.go:195] Run: crio --version
	I0927 00:34:06.923924  559927 ssh_runner.go:195] Run: crio --version
	I0927 00:34:06.961630  559927 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0927 00:34:06.964039  559927 cli_runner.go:164] Run: docker network inspect addons-220192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:34:06.979344  559927 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0927 00:34:06.982885  559927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:34:06.993886  559927 kubeadm.go:883] updating cluster {Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:34:06.994013  559927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:34:06.994079  559927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:34:07.065666  559927 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:34:07.065693  559927 crio.go:433] Images already preloaded, skipping extraction
	I0927 00:34:07.065759  559927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:34:07.103089  559927 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:34:07.103111  559927 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:34:07.103119  559927 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0927 00:34:07.103212  559927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-220192 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:34:07.103294  559927 ssh_runner.go:195] Run: crio config
	I0927 00:34:07.184942  559927 cni.go:84] Creating CNI manager for ""
	I0927 00:34:07.185003  559927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:34:07.185030  559927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:34:07.185073  559927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-220192 NodeName:addons-220192 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:34:07.185246  559927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-220192"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:34:07.185338  559927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:34:07.193935  559927 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:34:07.194048  559927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:34:07.202460  559927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0927 00:34:07.219678  559927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:34:07.237053  559927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0927 00:34:07.254481  559927 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0927 00:34:07.257688  559927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
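The bash one-liner above makes the control-plane.minikube.internal hosts entry idempotent: any existing line for that name is filtered out, a fresh entry is appended, and the result is copied back over /etc/hosts. A rough Go equivalent of that filter-and-append step, writing the file directly rather than via a temp file and sudo cp as the logged command does; the paths and the 192.168.49.2 address come from the log:

package main

// Sketch: idempotently (re)add the control-plane host entry, mirroring the
// grep -v / echo / cp pipeline in the log. Needs root to write /etc/hosts.
import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous entry for the control-plane name.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.49.2\tcontrol-plane.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}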
	I0927 00:34:07.268344  559927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:34:07.360228  559927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:34:07.373741  559927 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192 for IP: 192.168.49.2
	I0927 00:34:07.373817  559927 certs.go:194] generating shared ca certs ...
	I0927 00:34:07.373850  559927 certs.go:226] acquiring lock for ca certs: {Name:mkd73b356b28d0818fea73c44481b0cb2597afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:07.374052  559927 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key
	I0927 00:34:07.720680  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt ...
	I0927 00:34:07.720716  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt: {Name:mkbfcd9c6c45e82aff1171fec506aac41dc5280a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:07.720931  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key ...
	I0927 00:34:07.720946  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key: {Name:mk27b9aca1fe71da4c843dcf3c985bda93669b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:07.721037  559927 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key
	I0927 00:34:09.101274  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt ...
	I0927 00:34:09.101305  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt: {Name:mkdc0759b42a37859fc6068ba22254e0927be300 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.101947  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key ...
	I0927 00:34:09.101964  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key: {Name:mke7b97bcbcb62de5f7a0ca1a1958a806a1e0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.102051  559927 certs.go:256] generating profile certs ...
	I0927 00:34:09.102113  559927 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.key
	I0927 00:34:09.102130  559927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt with IP's: []
	I0927 00:34:09.315290  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt ...
	I0927 00:34:09.315324  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: {Name:mkfff86d6c11512911cf0969854882c551536630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.315544  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.key ...
	I0927 00:34:09.315558  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.key: {Name:mk1634c2995d45b5e8b115cffc851a552ceefda4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.315645  559927 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9
	I0927 00:34:09.315665  559927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0927 00:34:09.625710  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9 ...
	I0927 00:34:09.625740  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9: {Name:mk7150966e38d5953f0ffbbca37251c426945939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.625923  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9 ...
	I0927 00:34:09.625936  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9: {Name:mk05d3eba820733b8f36b06f33f5470f331f3307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.626021  559927 certs.go:381] copying /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9 -> /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt
	I0927 00:34:09.626100  559927 certs.go:385] copying /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9 -> /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key
	I0927 00:34:09.626154  559927 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key
	I0927 00:34:09.626175  559927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt with IP's: []
	I0927 00:34:10.552918  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt ...
	I0927 00:34:10.552956  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt: {Name:mkf5cd4cf9e9eaebbd419908d7e57768395a038f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:10.553141  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key ...
	I0927 00:34:10.553160  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key: {Name:mk5fec058a0a902adcdcf9089d18b3d6355794eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:10.553344  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem (1679 bytes)
	I0927 00:34:10.553391  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:34:10.553423  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:34:10.553451  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem (1675 bytes)
	I0927 00:34:10.554112  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:34:10.580588  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 00:34:10.603802  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:34:10.628713  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 00:34:10.653540  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 00:34:10.677124  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 00:34:10.701503  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:34:10.724622  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 00:34:10.748189  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:34:10.772084  559927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:34:10.789400  559927 ssh_runner.go:195] Run: openssl version
	I0927 00:34:10.794925  559927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:34:10.804621  559927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:34:10.808078  559927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:34 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:34:10.808143  559927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:34:10.814650  559927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
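The two steps above install the minikube CA into the system trust store: openssl x509 -hash -noout computes the certificate's subject hash, and the PEM is then symlinked as /etc/ssl/certs/<hash>.0 (b5213941.0 here). A small Go sketch of the same hash-and-symlink sequence, shelling out to openssl as the logged commands do; it assumes root and the paths shown in the log:

package main

// Sketch: link a CA certificate under its OpenSSL subject hash so that tools
// using the system trust store pick it up, mirroring the openssl and ln -fs
// commands in the log above.
import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // emulate the force behaviour of ln -fs
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}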
	I0927 00:34:10.823722  559927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:34:10.826819  559927 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:34:10.826870  559927 kubeadm.go:392] StartCluster: {Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:34:10.826950  559927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:34:10.827020  559927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:34:10.866663  559927 cri.go:89] found id: ""
	I0927 00:34:10.866760  559927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:34:10.875415  559927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:34:10.883762  559927 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0927 00:34:10.883827  559927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:34:10.893704  559927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:34:10.893724  559927 kubeadm.go:157] found existing configuration files:
	
	I0927 00:34:10.893774  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:34:10.902339  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:34:10.902423  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:34:10.910637  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:34:10.919187  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:34:10.919251  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:34:10.927057  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:34:10.935278  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:34:10.935346  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:34:10.943456  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:34:10.951694  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:34:10.951762  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 00:34:10.959916  559927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0927 00:34:10.995459  559927 kubeadm.go:310] W0927 00:34:10.994701    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:34:10.996690  559927 kubeadm.go:310] W0927 00:34:10.996201    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:34:11.020983  559927 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0927 00:34:11.080895  559927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 00:34:29.763728  559927 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:34:29.763788  559927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:34:29.763877  559927 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0927 00:34:29.763937  559927 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0927 00:34:29.764020  559927 kubeadm.go:310] OS: Linux
	I0927 00:34:29.764081  559927 kubeadm.go:310] CGROUPS_CPU: enabled
	I0927 00:34:29.764137  559927 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0927 00:34:29.764217  559927 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0927 00:34:29.764274  559927 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0927 00:34:29.764324  559927 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0927 00:34:29.764406  559927 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0927 00:34:29.764467  559927 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0927 00:34:29.764528  559927 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0927 00:34:29.764588  559927 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0927 00:34:29.764661  559927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:34:29.764772  559927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:34:29.764867  559927 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:34:29.764931  559927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:34:29.766962  559927 out.go:235]   - Generating certificates and keys ...
	I0927 00:34:29.767068  559927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:34:29.767153  559927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:34:29.767232  559927 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:34:29.767300  559927 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:34:29.767387  559927 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:34:29.767453  559927 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:34:29.767527  559927 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:34:29.767659  559927 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-220192 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:34:29.767722  559927 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:34:29.767855  559927 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-220192 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:34:29.767928  559927 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:34:29.768001  559927 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:34:29.768051  559927 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:34:29.768131  559927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:34:29.768206  559927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:34:29.768283  559927 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:34:29.768353  559927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:34:29.768436  559927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:34:29.768511  559927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:34:29.768606  559927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:34:29.768699  559927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:34:29.769783  559927 out.go:235]   - Booting up control plane ...
	I0927 00:34:29.769896  559927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:34:29.769989  559927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:34:29.770065  559927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:34:29.770172  559927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:34:29.770279  559927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:34:29.770329  559927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:34:29.770469  559927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:34:29.770575  559927 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:34:29.770637  559927 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.50140432s
	I0927 00:34:29.770724  559927 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:34:29.770784  559927 kubeadm.go:310] [api-check] The API server is healthy after 6.001791706s
	I0927 00:34:29.770893  559927 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:34:29.771024  559927 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:34:29.771086  559927 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:34:29.771270  559927 kubeadm.go:310] [mark-control-plane] Marking the node addons-220192 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:34:29.771331  559927 kubeadm.go:310] [bootstrap-token] Using token: 9ix9q6.4kz2sbtsprzpkswr
	I0927 00:34:29.773367  559927 out.go:235]   - Configuring RBAC rules ...
	I0927 00:34:29.773551  559927 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:34:29.773700  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:34:29.773871  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:34:29.774024  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:34:29.774161  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:34:29.774292  559927 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:34:29.774445  559927 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:34:29.774498  559927 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:34:29.774551  559927 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:34:29.774558  559927 kubeadm.go:310] 
	I0927 00:34:29.774618  559927 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:34:29.774626  559927 kubeadm.go:310] 
	I0927 00:34:29.774701  559927 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:34:29.774709  559927 kubeadm.go:310] 
	I0927 00:34:29.774754  559927 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:34:29.774813  559927 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:34:29.774870  559927 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:34:29.774879  559927 kubeadm.go:310] 
	I0927 00:34:29.774933  559927 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:34:29.774941  559927 kubeadm.go:310] 
	I0927 00:34:29.774988  559927 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:34:29.774996  559927 kubeadm.go:310] 
	I0927 00:34:29.775047  559927 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:34:29.775123  559927 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:34:29.775193  559927 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:34:29.775201  559927 kubeadm.go:310] 
	I0927 00:34:29.775284  559927 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:34:29.775362  559927 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:34:29.775370  559927 kubeadm.go:310] 
	I0927 00:34:29.775452  559927 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9ix9q6.4kz2sbtsprzpkswr \
	I0927 00:34:29.775556  559927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d8dda315011cb74d53922a23f64d2f20e11a31a3286152848c02c6c9df47cdc \
	I0927 00:34:29.775579  559927 kubeadm.go:310] 	--control-plane 
	I0927 00:34:29.775584  559927 kubeadm.go:310] 
	I0927 00:34:29.775668  559927 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:34:29.775676  559927 kubeadm.go:310] 
	I0927 00:34:29.775757  559927 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9ix9q6.4kz2sbtsprzpkswr \
	I0927 00:34:29.775873  559927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d8dda315011cb74d53922a23f64d2f20e11a31a3286152848c02c6c9df47cdc 
	I0927 00:34:29.775887  559927 cni.go:84] Creating CNI manager for ""
	I0927 00:34:29.775895  559927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:34:29.778035  559927 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 00:34:29.779166  559927 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 00:34:29.783667  559927 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 00:34:29.783687  559927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 00:34:29.802342  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0927 00:34:30.115884  559927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:34:30.116099  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:30.116240  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-220192 minikube.k8s.io/updated_at=2024_09_27T00_34_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-220192 minikube.k8s.io/primary=true
	I0927 00:34:30.127679  559927 ops.go:34] apiserver oom_adj: -16
	I0927 00:34:30.288090  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:30.788920  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:31.288744  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:31.788793  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:32.288933  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:32.788947  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:33.288195  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:33.380134  559927 kubeadm.go:1113] duration metric: took 3.264113362s to wait for elevateKubeSystemPrivileges
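The repeated "kubectl get sa default" invocations above are a poll-until-ready loop: minikube keeps asking for the default service account until kubeadm's controllers have created it, which is what the 3.264s elevateKubeSystemPrivileges metric measures. A minimal sketch of that pattern with os/exec; the kubectl path and kubeconfig flag mirror the log (sudo omitted), while the timeout and poll interval are assumptions:

package main

// Sketch: poll for the "default" service account until it exists or a
// deadline passes, approximating the retry loop visible in the log above.
import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl",
			"--kubeconfig=/var/lib/minikube/kubeconfig", "get", "sa", "default")
		if cmd.Run() == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	fmt.Println("timed out waiting for the default service account")
}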
	I0927 00:34:33.380167  559927 kubeadm.go:394] duration metric: took 22.553300472s to StartCluster
	I0927 00:34:33.380185  559927 settings.go:142] acquiring lock: {Name:mk5b1f005001018637d448709269193603885722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:33.380304  559927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 00:34:33.380761  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/kubeconfig: {Name:mkc30ade55bf966f83b95c0af3746bfadfd3f379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:33.380969  559927 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:34:33.381135  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:34:33.381376  559927 config.go:182] Loaded profile config "addons-220192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:34:33.381415  559927 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0927 00:34:33.381499  559927 addons.go:69] Setting yakd=true in profile "addons-220192"
	I0927 00:34:33.381517  559927 addons.go:234] Setting addon yakd=true in "addons-220192"
	I0927 00:34:33.381542  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.382036  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.382470  559927 addons.go:69] Setting metrics-server=true in profile "addons-220192"
	I0927 00:34:33.382492  559927 addons.go:234] Setting addon metrics-server=true in "addons-220192"
	I0927 00:34:33.382517  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.382550  559927 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-220192"
	I0927 00:34:33.382568  559927 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-220192"
	I0927 00:34:33.382595  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.382967  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.383084  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.383406  559927 out.go:177] * Verifying Kubernetes components...
	I0927 00:34:33.388011  559927 addons.go:69] Setting registry=true in profile "addons-220192"
	I0927 00:34:33.388044  559927 addons.go:234] Setting addon registry=true in "addons-220192"
	I0927 00:34:33.388084  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.388540  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.388723  559927 addons.go:69] Setting cloud-spanner=true in profile "addons-220192"
	I0927 00:34:33.388755  559927 addons.go:234] Setting addon cloud-spanner=true in "addons-220192"
	I0927 00:34:33.388797  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.389200  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.392076  559927 addons.go:69] Setting storage-provisioner=true in profile "addons-220192"
	I0927 00:34:33.392108  559927 addons.go:234] Setting addon storage-provisioner=true in "addons-220192"
	I0927 00:34:33.392149  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.392954  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.395344  559927 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-220192"
	I0927 00:34:33.395417  559927 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-220192"
	I0927 00:34:33.395737  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.396386  559927 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-220192"
	I0927 00:34:33.396450  559927 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-220192"
	I0927 00:34:33.396481  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.396929  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.404208  559927 addons.go:69] Setting volcano=true in profile "addons-220192"
	I0927 00:34:33.404292  559927 addons.go:234] Setting addon volcano=true in "addons-220192"
	I0927 00:34:33.404344  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.404886  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.415902  559927 addons.go:69] Setting default-storageclass=true in profile "addons-220192"
	I0927 00:34:33.415938  559927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-220192"
	I0927 00:34:33.416335  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.419241  559927 addons.go:69] Setting volumesnapshots=true in profile "addons-220192"
	I0927 00:34:33.419284  559927 addons.go:234] Setting addon volumesnapshots=true in "addons-220192"
	I0927 00:34:33.419325  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.419808  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.436466  559927 addons.go:69] Setting gcp-auth=true in profile "addons-220192"
	I0927 00:34:33.436505  559927 mustload.go:65] Loading cluster: addons-220192
	I0927 00:34:33.436716  559927 config.go:182] Loaded profile config "addons-220192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:34:33.436976  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.439910  559927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:34:33.454508  559927 addons.go:69] Setting ingress=true in profile "addons-220192"
	I0927 00:34:33.454557  559927 addons.go:234] Setting addon ingress=true in "addons-220192"
	I0927 00:34:33.454603  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.455134  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.470431  559927 addons.go:69] Setting ingress-dns=true in profile "addons-220192"
	I0927 00:34:33.470469  559927 addons.go:234] Setting addon ingress-dns=true in "addons-220192"
	I0927 00:34:33.470522  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.471029  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.480467  559927 addons.go:69] Setting inspektor-gadget=true in profile "addons-220192"
	I0927 00:34:33.480560  559927 addons.go:234] Setting addon inspektor-gadget=true in "addons-220192"
	I0927 00:34:33.480643  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.481279  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.501566  559927 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:34:33.502172  559927 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:34:33.515339  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:34:33.515409  559927 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:34:33.515513  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.533114  559927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:34:33.511884  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:34:33.512482  559927 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:34:33.533606  559927 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:34:33.534258  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.539191  559927 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-220192"
	I0927 00:34:33.539240  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.539680  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.555238  559927 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:34:33.555260  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:34:33.555320  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.575338  559927 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:34:33.575507  559927 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:34:33.579330  559927 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:34:33.579968  559927 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:34:33.579984  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:34:33.580043  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.589869  559927 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:34:33.589937  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:34:33.590044  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.592413  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:34:33.592687  559927 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:34:33.592703  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:34:33.592762  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.594005  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:34:33.594022  559927 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:34:33.594072  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.594614  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:34:33.597708  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:34:33.599885  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:34:33.601815  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:34:33.603187  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:34:33.604424  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:34:33.606160  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:34:33.608900  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:34:33.612242  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:34:33.612266  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:34:33.612345  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	W0927 00:34:33.625523  559927 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0927 00:34:33.636029  559927 addons.go:234] Setting addon default-storageclass=true in "addons-220192"
	I0927 00:34:33.636070  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.636475  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.653660  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:34:33.658697  559927 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 00:34:33.662778  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:34:33.663023  559927 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:34:33.663038  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 00:34:33.663104  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.676402  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 00:34:33.705602  559927 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:34:33.705630  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 00:34:33.705724  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.728629  559927 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:34:33.732158  559927 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:34:33.732181  559927 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:34:33.732260  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.761441  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.777582  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.779733  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.781803  559927 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:34:33.785371  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.796375  559927 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:34:33.796498  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.799961  559927 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:34:33.799986  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:34:33.800052  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.803725  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.805040  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.827419  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.827850  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.868201  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.878799  559927 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:34:33.878821  559927 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:34:33.878995  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.889070  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.894820  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	W0927 00:34:33.897254  559927 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0927 00:34:33.897281  559927 retry.go:31] will retry after 222.514368ms: ssh: handshake failed: EOF
	I0927 00:34:33.899204  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.924221  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:34.099923  559927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:34:34.099950  559927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:34:34.143807  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:34:34.143833  559927 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:34:34.150094  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:34:34.152840  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:34:34.152862  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:34:34.152949  559927 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:34:34.152971  559927 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:34:34.228010  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:34:34.228039  559927 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:34:34.241657  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:34:34.253784  559927 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:34:34.253808  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:34:34.256601  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:34:34.268169  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:34:34.271096  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:34:34.271119  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:34:34.275626  559927 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:34:34.275648  559927 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:34:34.293383  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:34:34.300829  559927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:34:34.300856  559927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:34:34.322150  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:34:34.322176  559927 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:34:34.344962  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:34:34.344989  559927 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:34:34.369058  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:34:34.404038  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:34:34.425344  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:34:34.432017  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:34:34.432041  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:34:34.435286  559927 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:34:34.435320  559927 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:34:34.435999  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:34:34.436016  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:34:34.474152  559927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:34:34.474181  559927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:34:34.511874  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:34:34.511910  559927 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:34:34.590980  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:34:34.591007  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:34:34.594814  559927 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:34:34.594884  559927 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:34:34.609412  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:34:34.664262  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:34:34.664331  559927 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:34:34.667546  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:34:34.720328  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:34:34.720354  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:34:34.789427  559927 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:34:34.789454  559927 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:34:34.797435  559927 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.357437454s)
	I0927 00:34:34.797514  559927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:34:34.797580  559927 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.416422565s)
	I0927 00:34:34.797731  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:34:34.820770  559927 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:34:34.820801  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:34:34.864725  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:34:34.864753  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:34:34.933391  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:34:34.981553  559927 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:34:34.981582  559927 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:34:35.002650  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:34:35.002677  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:34:35.126608  559927 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:34:35.126635  559927 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:34:35.143210  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:34:35.143238  559927 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:34:35.205388  559927 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:34:35.205414  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:34:35.215693  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:34:35.215723  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:34:35.251131  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:34:35.275630  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:34:35.275666  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:34:35.367653  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:34:35.367680  559927 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:34:35.496151  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:34:37.834979  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.684849473s)
	I0927 00:34:39.467821  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.226126912s)
	I0927 00:34:39.467861  559927 addons.go:475] Verifying addon ingress=true in "addons-220192"
	I0927 00:34:39.468074  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.211440688s)
	I0927 00:34:39.468139  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.199948358s)
	I0927 00:34:39.468192  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.174786518s)
	I0927 00:34:39.468376  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.099295453s)
	I0927 00:34:39.468473  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.064403818s)
	I0927 00:34:39.468511  559927 addons.go:475] Verifying addon registry=true in "addons-220192"
	I0927 00:34:39.468878  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.04350358s)
	I0927 00:34:39.468943  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.859503354s)
	I0927 00:34:39.469053  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.801481487s)
	I0927 00:34:39.469062  559927 addons.go:475] Verifying addon metrics-server=true in "addons-220192"
	I0927 00:34:39.469120  559927 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.671373109s)
	I0927 00:34:39.469132  559927 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0927 00:34:39.469138  559927 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.671602913s)
	I0927 00:34:39.469967  559927 node_ready.go:35] waiting up to 6m0s for node "addons-220192" to be "Ready" ...
	I0927 00:34:39.472151  559927 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-220192 service yakd-dashboard -n yakd-dashboard
	
	I0927 00:34:39.472243  559927 out.go:177] * Verifying ingress addon...
	I0927 00:34:39.472289  559927 out.go:177] * Verifying registry addon...
	I0927 00:34:39.475538  559927 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 00:34:39.476423  559927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:34:39.494665  559927 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:34:39.494694  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:39.496798  559927 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 00:34:39.496825  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0927 00:34:39.511262  559927 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0927 00:34:39.579923  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.328744084s)
	I0927 00:34:39.580128  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.646707554s)
	W0927 00:34:39.580156  559927 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:34:39.580183  559927 retry.go:31] will retry after 283.440734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:34:39.831932  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.335725047s)
	I0927 00:34:39.831979  559927 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-220192"
	I0927 00:34:39.836412  559927 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:34:39.840109  559927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:34:39.846548  559927 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:34:39.846621  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:39.864697  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:34:40.005609  559927 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-220192" context rescaled to 1 replicas
	I0927 00:34:40.006033  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:40.013393  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:40.344695  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:40.482976  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:40.484052  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:40.844800  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:40.983568  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:40.985312  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:41.344228  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:41.473653  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:41.480232  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:41.481108  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:41.844824  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:41.984071  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:41.984993  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:42.344135  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:42.481608  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:42.482992  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:42.819929  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.955146397s)
	I0927 00:34:42.845156  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:42.980034  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:42.980570  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:43.344660  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:43.464416  559927 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:34:43.464573  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:43.474433  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:43.481829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:43.483496  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:43.483835  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:43.590588  559927 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:34:43.609201  559927 addons.go:234] Setting addon gcp-auth=true in "addons-220192"
	I0927 00:34:43.609254  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:43.609751  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:43.626431  559927 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:34:43.626487  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:43.644327  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:43.741116  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:34:43.743530  559927 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:34:43.746014  559927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:34:43.746031  559927 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:34:43.763769  559927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:34:43.763793  559927 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:34:43.780969  559927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:34:43.780996  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:34:43.799112  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:34:43.844675  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:43.980511  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:43.982005  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:44.322692  559927 addons.go:475] Verifying addon gcp-auth=true in "addons-220192"
	I0927 00:34:44.325770  559927 out.go:177] * Verifying gcp-auth addon...
	I0927 00:34:44.329465  559927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:34:44.333766  559927 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:34:44.333790  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:44.344656  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:44.479817  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:44.480105  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:44.832869  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:44.844511  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:44.979614  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:44.980284  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:45.332817  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:45.343965  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:45.479741  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:45.481120  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:45.832899  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:45.844116  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:45.973317  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:45.979458  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:45.980299  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:46.332489  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:46.343738  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:46.479974  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:46.480735  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:46.833062  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:46.843843  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:46.979508  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:46.980073  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:47.333256  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:47.343452  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:47.479659  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:47.480382  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:47.832663  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:47.843746  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:47.973598  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:47.982398  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:47.983191  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:48.333001  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:48.343792  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:48.480415  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:48.480692  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:48.833104  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:48.843760  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:48.979483  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:48.980880  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:49.333641  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:49.344144  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:49.480257  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:49.483517  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:49.833431  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:49.844206  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:49.991059  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:49.992115  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:49.992352  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:50.332707  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:50.344159  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:50.480722  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:50.481738  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:50.833298  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:50.843495  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:50.979455  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:50.981405  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:51.334674  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:51.344002  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:51.479106  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:51.480280  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:51.833792  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:51.844086  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:51.982704  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:51.983622  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:52.333240  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:52.343403  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:52.474546  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:52.479449  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:52.482139  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:52.832804  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:52.843907  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:52.979328  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:52.980447  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:53.333021  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:53.343677  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:53.479431  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:53.480526  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:53.832723  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:53.843485  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:53.979263  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:53.979973  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:54.333522  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:54.348182  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:54.479005  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:54.480787  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:54.832509  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:54.844676  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:54.974064  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:54.979672  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:54.980722  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:55.333594  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:55.343740  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:55.479360  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:55.480245  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:55.832680  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:55.843543  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:55.979952  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:55.980389  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:56.332637  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:56.344144  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:56.479599  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:56.480801  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:56.832314  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:56.843591  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:56.979818  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:56.982964  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:57.333340  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:57.343648  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:57.473718  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:57.479686  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:57.480106  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:57.833276  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:57.843837  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:57.980259  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:57.980971  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:58.332941  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:58.344198  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:58.479441  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:58.480562  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:58.832511  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:58.843959  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:58.979304  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:58.979902  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:59.332471  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:59.343688  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:59.473837  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:59.480105  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:59.480820  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:59.833342  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:59.844089  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:59.979965  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:59.980877  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:00.334431  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:00.344836  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:00.479625  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:00.481083  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:00.833462  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:00.844379  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:00.979507  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:00.980347  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:01.333369  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:01.344056  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:01.480874  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:01.481106  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:01.833477  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:01.843808  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:01.973440  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:01.981517  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:01.981736  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:02.332928  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:02.344231  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:02.479408  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:02.480259  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:02.832727  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:02.843980  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:02.979737  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:02.980467  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:03.332964  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:03.343740  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:03.479543  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:03.480087  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:03.833215  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:03.844240  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:03.974500  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:03.980031  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:03.981606  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:04.332668  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:04.343749  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:04.479236  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:04.480360  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:04.833389  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:04.844094  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:04.980186  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:04.980297  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:05.332559  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:05.343815  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:05.479519  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:05.480644  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:05.832634  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:05.843675  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:05.979646  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:05.980528  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:06.332905  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:06.344008  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:06.473815  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:06.480097  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:06.480815  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:06.833469  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:06.844027  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:06.979148  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:06.980069  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:07.332568  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:07.343773  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:07.479920  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:07.479969  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:07.833963  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:07.843803  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:07.980212  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:07.980996  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:08.333337  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:08.343626  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:08.479786  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:08.480531  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:08.832973  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:08.844021  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:08.973090  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:08.980044  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:08.980573  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:09.332531  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:09.348321  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:09.479485  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:09.479813  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:09.833068  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:09.844031  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:09.979535  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:09.981261  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:10.333874  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:10.354135  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:10.484607  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:10.485964  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:10.832728  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:10.844943  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:10.973698  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:10.980277  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:10.980859  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:11.333372  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:11.345921  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:11.479342  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:11.480218  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:11.833074  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:11.844071  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:11.979619  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:11.981229  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:12.333379  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:12.344154  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:12.480895  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:12.481142  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:12.833217  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:12.843423  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:12.979301  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:12.980351  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:13.337392  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:13.343917  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:13.473805  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:13.479743  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:13.481489  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:13.832829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:13.844071  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:13.979477  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:13.980477  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:14.332885  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:14.343685  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:14.479765  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:14.480539  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:14.832829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:14.843971  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:14.980105  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:14.980578  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:15.332551  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:15.343348  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:15.479922  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:15.480686  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:15.833208  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:15.843933  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:15.973782  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:15.979898  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:15.980469  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:16.333214  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:16.344108  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:16.479743  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:16.480603  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:16.833361  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:16.843717  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:16.979315  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:16.980756  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:17.333389  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:17.343864  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:17.480054  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:17.480955  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:17.833334  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:17.843911  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:17.979629  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:17.980181  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:18.332516  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:18.343396  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:18.473097  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:18.479374  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:18.479963  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:18.832640  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:18.844049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:18.996522  559927 node_ready.go:49] node "addons-220192" has status "Ready":"True"
	I0927 00:35:18.996599  559927 node_ready.go:38] duration metric: took 39.526610666s for node "addons-220192" to be "Ready" ...
	I0927 00:35:18.996626  559927 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:35:19.019040  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:19.023994  559927 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:35:19.024068  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:19.032376  559927 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wnhpd" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:19.398908  559927 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:35:19.398987  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:19.399566  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:19.483156  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:19.490619  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:19.833611  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:19.852005  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:20.016049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:20.016250  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:20.347509  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:20.351821  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:20.481433  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:20.482332  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:20.542199  559927 pod_ready.go:93] pod "coredns-7c65d6cfc9-wnhpd" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.542229  559927 pod_ready.go:82] duration metric: took 1.509780007s for pod "coredns-7c65d6cfc9-wnhpd" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.542251  559927 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.548166  559927 pod_ready.go:93] pod "etcd-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.548192  559927 pod_ready.go:82] duration metric: took 5.932914ms for pod "etcd-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.548207  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.553717  559927 pod_ready.go:93] pod "kube-apiserver-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.553741  559927 pod_ready.go:82] duration metric: took 5.524718ms for pod "kube-apiserver-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.553754  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.559029  559927 pod_ready.go:93] pod "kube-controller-manager-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.559057  559927 pod_ready.go:82] duration metric: took 5.294414ms for pod "kube-controller-manager-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.559071  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-shqd9" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.573997  559927 pod_ready.go:93] pod "kube-proxy-shqd9" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.574023  559927 pod_ready.go:82] duration metric: took 14.944163ms for pod "kube-proxy-shqd9" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.574036  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.833824  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:20.848660  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:20.974442  559927 pod_ready.go:93] pod "kube-scheduler-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.974470  559927 pod_ready.go:82] duration metric: took 400.425942ms for pod "kube-scheduler-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.974484  559927 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.982452  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:20.984121  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:21.333221  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:21.345136  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:21.482607  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:21.483622  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:21.833129  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:21.845258  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:21.981612  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:21.982849  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:22.333804  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:22.345228  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:22.481208  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:22.482132  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:22.833026  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:22.845328  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:22.980591  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:22.981225  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:22.984148  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:23.332828  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:23.345437  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:23.480956  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:23.481629  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:23.833324  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:23.845811  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:23.980489  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:23.981126  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:24.334215  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:24.345777  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:24.492856  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:24.501358  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:24.833375  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:24.845765  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:24.984320  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:24.985535  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:25.333030  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:25.346129  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:25.483387  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:25.483462  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:25.491536  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:25.833367  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:25.845582  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:25.986028  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:25.987700  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:26.333088  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:26.347436  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:26.482707  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:26.485635  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:26.835052  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:26.936552  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:26.991369  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:26.993292  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:27.333040  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:27.349818  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:27.490040  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:27.500797  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:27.502364  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:27.833179  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:27.844956  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:27.987680  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:27.989267  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:28.334430  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:28.345015  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:28.482024  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:28.482969  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:28.834146  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:28.845784  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:28.981547  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:28.987897  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:29.332824  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:29.345018  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:29.481343  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:29.483392  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:29.833401  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:29.845939  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:29.983969  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:29.986347  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:29.991317  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:30.333446  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:30.344995  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:30.508060  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:30.509114  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:30.833954  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:30.847331  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:30.983296  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:30.984469  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:31.333529  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:31.346615  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:31.483463  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:31.485699  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:31.834409  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:31.847606  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:31.990264  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:31.991499  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:31.995169  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:32.333938  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:32.345440  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:32.493919  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:32.495619  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:32.838133  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:32.848315  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:33.004360  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:33.006597  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:33.334374  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:33.348157  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:33.487589  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:33.488353  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:33.833623  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:33.845948  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:34.000333  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:34.002102  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:34.006988  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:34.352293  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:34.359508  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:34.502221  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:34.503150  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:34.835304  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:34.865176  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:34.985218  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:34.985823  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:35.334075  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:35.345971  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:35.483800  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:35.491250  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:35.833110  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:35.846328  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:35.979803  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:35.982985  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:36.335407  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:36.345098  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:36.481660  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:36.481954  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:36.483328  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:36.832836  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:36.844919  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:36.982758  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:36.984021  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:37.332859  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:37.344703  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:37.479523  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:37.482358  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:37.833392  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:37.845097  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:37.981768  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:37.982364  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:38.333562  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:38.346750  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:38.538171  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:38.539659  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:38.574486  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:38.833410  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:38.845154  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:38.983941  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:38.986331  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:39.333236  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:39.344860  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:39.487423  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:39.488653  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:39.833699  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:39.845135  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:39.982293  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:39.983320  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:40.334049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:40.345576  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:40.487727  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:40.489357  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:40.850545  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:40.869817  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:40.988622  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:40.997340  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:40.999067  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:41.333838  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:41.344941  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:41.481094  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:41.482258  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:41.833163  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:41.844771  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:41.983305  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:41.984333  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:42.334272  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:42.345229  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:42.492644  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:42.493566  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:42.832709  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:42.851142  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:42.983002  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:42.987339  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:43.333193  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:43.345053  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:43.483125  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:43.484113  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:43.488641  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:43.833337  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:43.845279  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:43.980602  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:43.984005  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:44.333444  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:44.345218  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:44.481670  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:44.482647  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:44.835774  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:44.845367  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:44.995835  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:44.998309  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:45.333453  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:45.345157  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:45.480354  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:45.484276  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:45.833765  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:45.845022  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:45.982788  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:45.986074  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:45.988189  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:46.333646  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:46.346350  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:46.491046  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:46.492583  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:46.835571  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:46.846801  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:46.981975  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:46.983265  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:47.333111  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:47.345419  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:47.484650  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:47.489278  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:47.832786  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:47.845960  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:47.991387  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:47.992583  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:48.333677  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:48.347026  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:48.492253  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:48.493184  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:48.499877  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:48.833921  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:48.845808  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:48.979562  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:48.982627  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:49.333741  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:49.344581  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:49.480529  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:49.480919  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:49.833732  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:49.845393  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:49.981677  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:49.982936  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:50.333400  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:50.346044  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:50.480790  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:50.483023  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:50.833421  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:50.849074  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:50.981931  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:50.989634  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:50.995853  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:51.334696  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:51.348991  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:51.491426  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:51.492330  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:51.833618  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:51.844626  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:51.984195  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:51.985302  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:52.334919  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:52.344890  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:52.483430  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:52.484577  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:52.833804  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:52.845966  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:52.980535  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:52.981657  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:53.333493  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:53.345580  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:53.481301  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:53.482899  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:53.483553  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:53.833110  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:53.845938  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:53.996740  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:53.998174  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:54.334265  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:54.345544  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:54.488077  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:54.489088  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:54.833856  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:54.846893  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:54.982313  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:54.984449  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:55.333590  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:55.345439  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:55.481901  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:55.483756  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:55.484959  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:55.833795  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:55.846912  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:55.985194  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:55.986869  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:56.332981  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:56.345961  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:56.484347  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:56.485464  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:56.834149  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:56.849037  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:56.982925  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:56.986831  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:57.333287  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:57.344956  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:57.481955  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:57.492325  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:57.493676  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:57.833426  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:57.844766  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:57.982873  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:57.984241  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:58.334364  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:58.346131  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:58.492147  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:58.492947  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:58.834054  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:58.853019  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:58.991069  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:58.992535  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:59.333737  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:59.346124  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:59.495213  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:59.495807  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:59.496471  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:59.833938  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:59.845169  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:59.983223  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:59.984276  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:00.333940  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:00.345113  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:00.481959  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:00.482968  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:00.834016  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:00.845460  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:00.984100  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:00.985224  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:01.332734  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:01.344581  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:01.486492  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:01.487076  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:01.833007  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:01.844703  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:01.981792  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:01.982901  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:01.983764  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:02.334761  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:02.345539  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:02.487447  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:02.491829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:02.834434  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:02.845976  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:02.984185  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:02.987921  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:03.334009  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:03.345775  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:03.481785  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:03.481999  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:03.834559  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:03.846410  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:03.982047  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:03.983140  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:03.986356  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:04.334016  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:04.345507  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:04.482381  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:04.483345  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:04.833296  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:04.845032  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:04.983083  559927 kapi.go:107] duration metric: took 1m25.506656031s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:36:04.983755  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:05.334049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:05.345555  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:05.480336  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:05.833772  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:05.845009  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:05.982793  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:06.334193  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:06.346939  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:06.482860  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:06.484360  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:06.833274  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:06.844879  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:06.982428  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:07.332952  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:07.347731  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:07.482480  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:07.833289  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:07.844648  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:07.980267  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:08.333858  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:08.345865  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:08.481076  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:08.483138  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:08.835184  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:08.845444  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:08.987050  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:09.334706  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:09.348925  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:09.482708  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:09.834286  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:09.845038  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:09.986190  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:10.333090  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:10.344775  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:10.480737  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:10.833646  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:10.846522  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:10.982188  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:10.982779  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:11.333798  559927 kapi.go:107] duration metric: took 1m27.004325034s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 00:36:11.335762  559927 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-220192 cluster.
	I0927 00:36:11.337808  559927 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:36:11.339463  559927 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 00:36:11.344308  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:11.480962  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:11.846998  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:11.989166  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:12.345349  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:12.483345  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:12.845611  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:12.985783  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:12.987818  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:13.345705  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:13.483215  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:13.844991  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:13.984190  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:14.345761  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:14.483266  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:14.848904  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:14.983719  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:15.344480  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:15.486603  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:15.492777  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:15.846650  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:15.979870  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:16.345708  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:16.480932  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:16.845136  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:16.982088  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:17.345624  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:17.482482  559927 kapi.go:107] duration metric: took 1m38.006940645s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 00:36:17.844816  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:17.984704  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:18.345226  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:18.845178  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:19.349482  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:19.846085  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:20.349935  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:20.481081  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:20.845700  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:21.345969  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:21.844863  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:22.345753  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:22.845200  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:22.981147  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:23.346423  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:23.845463  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:24.345795  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:24.845049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:25.345257  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:25.484602  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:25.846829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:26.347013  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:26.845138  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:27.345506  559927 kapi.go:107] duration metric: took 1m47.50539711s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 00:36:27.348020  559927 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0927 00:36:27.351337  559927 addons.go:510] duration metric: took 1m53.969914524s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0927 00:36:27.980368  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:29.982001  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:32.481885  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:34.980951  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:36.981764  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:39.480929  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:39.981626  559927 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"True"
	I0927 00:36:39.981655  559927 pod_ready.go:82] duration metric: took 1m19.007136304s for pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace to be "Ready" ...
	I0927 00:36:39.981668  559927 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dqrvw" in "kube-system" namespace to be "Ready" ...
	I0927 00:36:39.986994  559927 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dqrvw" in "kube-system" namespace has status "Ready":"True"
	I0927 00:36:39.987021  559927 pod_ready.go:82] duration metric: took 5.342068ms for pod "nvidia-device-plugin-daemonset-dqrvw" in "kube-system" namespace to be "Ready" ...
	I0927 00:36:39.987044  559927 pod_ready.go:39] duration metric: took 1m20.990388006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:36:39.987060  559927 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:36:39.987091  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:36:39.987152  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:36:40.044709  559927 cri.go:89] found id: "04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:40.044730  559927 cri.go:89] found id: ""
	I0927 00:36:40.044737  559927 logs.go:276] 1 containers: [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395]
	I0927 00:36:40.044793  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.049159  559927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 00:36:40.049232  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:36:40.092137  559927 cri.go:89] found id: "6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:40.092160  559927 cri.go:89] found id: ""
	I0927 00:36:40.092168  559927 logs.go:276] 1 containers: [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e]
	I0927 00:36:40.092226  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.095880  559927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 00:36:40.095952  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:36:40.136619  559927 cri.go:89] found id: "1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:40.136643  559927 cri.go:89] found id: ""
	I0927 00:36:40.136651  559927 logs.go:276] 1 containers: [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6]
	I0927 00:36:40.136728  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.140255  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:36:40.140338  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:36:40.191576  559927 cri.go:89] found id: "555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:40.191596  559927 cri.go:89] found id: ""
	I0927 00:36:40.191603  559927 logs.go:276] 1 containers: [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5]
	I0927 00:36:40.191664  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.195147  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:36:40.195228  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:36:40.232473  559927 cri.go:89] found id: "5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:40.232496  559927 cri.go:89] found id: ""
	I0927 00:36:40.232504  559927 logs.go:276] 1 containers: [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315]
	I0927 00:36:40.232560  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.236094  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:36:40.236166  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:36:40.273140  559927 cri.go:89] found id: "2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:40.273163  559927 cri.go:89] found id: ""
	I0927 00:36:40.273170  559927 logs.go:276] 1 containers: [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9]
	I0927 00:36:40.273258  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.276617  559927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 00:36:40.276695  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:36:40.313852  559927 cri.go:89] found id: "d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:40.313876  559927 cri.go:89] found id: ""
	I0927 00:36:40.313885  559927 logs.go:276] 1 containers: [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5]
	I0927 00:36:40.313941  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.317368  559927 logs.go:123] Gathering logs for kubelet ...
	I0927 00:36:40.317391  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:36:40.354686  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.883351    1511 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:40.354935  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.883402    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:40.355126  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.916164    1511 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:40.355357  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:40.356718  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:40.357232  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:40.357591  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:40.358101  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:40.415196  559927 logs.go:123] Gathering logs for kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] ...
	I0927 00:36:40.415235  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:40.520289  559927 logs.go:123] Gathering logs for kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] ...
	I0927 00:36:40.520324  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:40.569490  559927 logs.go:123] Gathering logs for kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] ...
	I0927 00:36:40.569523  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:40.620143  559927 logs.go:123] Gathering logs for kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] ...
	I0927 00:36:40.620183  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:40.663881  559927 logs.go:123] Gathering logs for CRI-O ...
	I0927 00:36:40.663911  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 00:36:40.763619  559927 logs.go:123] Gathering logs for dmesg ...
	I0927 00:36:40.763658  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:36:40.779898  559927 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:36:40.779926  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:36:40.969685  559927 logs.go:123] Gathering logs for kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] ...
	I0927 00:36:40.969715  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:41.024968  559927 logs.go:123] Gathering logs for etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] ...
	I0927 00:36:41.025001  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:41.081642  559927 logs.go:123] Gathering logs for coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] ...
	I0927 00:36:41.081676  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:41.120059  559927 logs.go:123] Gathering logs for container status ...
	I0927 00:36:41.120093  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:36:41.178658  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:41.178684  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:36:41.178749  559927 out.go:270] X Problems detected in kubelet:
	W0927 00:36:41.178763  559927 out.go:270]   Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:41.178772  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:41.178787  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:41.178794  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:41.178804  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:41.178810  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:41.178816  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:36:51.180508  559927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:36:51.193914  559927 api_server.go:72] duration metric: took 2m17.812908825s to wait for apiserver process to appear ...
	I0927 00:36:51.193938  559927 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:36:51.193970  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:36:51.194024  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:36:51.258037  559927 cri.go:89] found id: "04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:51.258058  559927 cri.go:89] found id: ""
	I0927 00:36:51.258066  559927 logs.go:276] 1 containers: [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395]
	I0927 00:36:51.258120  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.261573  559927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 00:36:51.261654  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:36:51.300961  559927 cri.go:89] found id: "6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:51.300984  559927 cri.go:89] found id: ""
	I0927 00:36:51.300993  559927 logs.go:276] 1 containers: [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e]
	I0927 00:36:51.301047  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.304390  559927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 00:36:51.304462  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:36:51.344486  559927 cri.go:89] found id: "1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:51.344509  559927 cri.go:89] found id: ""
	I0927 00:36:51.344517  559927 logs.go:276] 1 containers: [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6]
	I0927 00:36:51.344572  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.348065  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:36:51.348139  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:36:51.384964  559927 cri.go:89] found id: "555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:51.384988  559927 cri.go:89] found id: ""
	I0927 00:36:51.384996  559927 logs.go:276] 1 containers: [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5]
	I0927 00:36:51.385080  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.388530  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:36:51.388601  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:36:51.426096  559927 cri.go:89] found id: "5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:51.426119  559927 cri.go:89] found id: ""
	I0927 00:36:51.426127  559927 logs.go:276] 1 containers: [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315]
	I0927 00:36:51.426183  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.429629  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:36:51.429716  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:36:51.466515  559927 cri.go:89] found id: "2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:51.466536  559927 cri.go:89] found id: ""
	I0927 00:36:51.466544  559927 logs.go:276] 1 containers: [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9]
	I0927 00:36:51.466604  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.470090  559927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 00:36:51.470164  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:36:51.509078  559927 cri.go:89] found id: "d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:51.509100  559927 cri.go:89] found id: ""
	I0927 00:36:51.509107  559927 logs.go:276] 1 containers: [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5]
	I0927 00:36:51.509161  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.512599  559927 logs.go:123] Gathering logs for kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] ...
	I0927 00:36:51.512667  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:51.606345  559927 logs.go:123] Gathering logs for kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] ...
	I0927 00:36:51.606381  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:51.648842  559927 logs.go:123] Gathering logs for CRI-O ...
	I0927 00:36:51.648870  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 00:36:51.751992  559927 logs.go:123] Gathering logs for container status ...
	I0927 00:36:51.752031  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:36:51.802535  559927 logs.go:123] Gathering logs for kubelet ...
	I0927 00:36:51.802567  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:36:51.843443  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.883351    1511 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:51.843686  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.883402    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:51.843879  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.916164    1511 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:51.844104  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:51.845476  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:51.845988  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:51.846347  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:51.846856  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:51.904915  559927 logs.go:123] Gathering logs for dmesg ...
	I0927 00:36:51.904950  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:36:51.921815  559927 logs.go:123] Gathering logs for etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] ...
	I0927 00:36:51.921883  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:51.982538  559927 logs.go:123] Gathering logs for coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] ...
	I0927 00:36:51.982627  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:52.028370  559927 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:36:52.028401  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:36:52.168300  559927 logs.go:123] Gathering logs for kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] ...
	I0927 00:36:52.168332  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:52.232001  559927 logs.go:123] Gathering logs for kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] ...
	I0927 00:36:52.232037  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:52.282225  559927 logs.go:123] Gathering logs for kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] ...
	I0927 00:36:52.282254  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:52.325692  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:52.325717  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:36:52.325772  559927 out.go:270] X Problems detected in kubelet:
	W0927 00:36:52.325789  559927 out.go:270]   Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:52.325804  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:52.325811  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:52.325824  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:52.325830  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:52.325836  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:52.325846  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:02.327724  559927 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0927 00:37:02.335228  559927 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0927 00:37:02.336164  559927 api_server.go:141] control plane version: v1.31.1
	I0927 00:37:02.336197  559927 api_server.go:131] duration metric: took 11.142248149s to wait for apiserver health ...
	I0927 00:37:02.336207  559927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:37:02.336227  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:37:02.336293  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:37:02.373662  559927 cri.go:89] found id: "04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:37:02.373688  559927 cri.go:89] found id: ""
	I0927 00:37:02.373696  559927 logs.go:276] 1 containers: [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395]
	I0927 00:37:02.373750  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.377092  559927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 00:37:02.377160  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:37:02.414236  559927 cri.go:89] found id: "6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:37:02.414265  559927 cri.go:89] found id: ""
	I0927 00:37:02.414279  559927 logs.go:276] 1 containers: [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e]
	I0927 00:37:02.414335  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.417663  559927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 00:37:02.417741  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:37:02.468306  559927 cri.go:89] found id: "1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:37:02.468327  559927 cri.go:89] found id: ""
	I0927 00:37:02.468335  559927 logs.go:276] 1 containers: [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6]
	I0927 00:37:02.468389  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.471964  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:37:02.472034  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:37:02.512245  559927 cri.go:89] found id: "555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:37:02.512267  559927 cri.go:89] found id: ""
	I0927 00:37:02.512275  559927 logs.go:276] 1 containers: [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5]
	I0927 00:37:02.512330  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.515876  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:37:02.515968  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:37:02.552023  559927 cri.go:89] found id: "5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:37:02.552047  559927 cri.go:89] found id: ""
	I0927 00:37:02.552055  559927 logs.go:276] 1 containers: [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315]
	I0927 00:37:02.552110  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.555592  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:37:02.555670  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:37:02.601327  559927 cri.go:89] found id: "2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:37:02.601351  559927 cri.go:89] found id: ""
	I0927 00:37:02.601359  559927 logs.go:276] 1 containers: [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9]
	I0927 00:37:02.601447  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.604953  559927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 00:37:02.605044  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:37:02.642635  559927 cri.go:89] found id: "d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:37:02.642660  559927 cri.go:89] found id: ""
	I0927 00:37:02.642668  559927 logs.go:276] 1 containers: [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5]
	I0927 00:37:02.642789  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.646380  559927 logs.go:123] Gathering logs for kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] ...
	I0927 00:37:02.646406  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:37:02.718917  559927 logs.go:123] Gathering logs for kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] ...
	I0927 00:37:02.718956  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:37:02.761541  559927 logs.go:123] Gathering logs for container status ...
	I0927 00:37:02.761572  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:37:02.809548  559927 logs.go:123] Gathering logs for kubelet ...
	I0927 00:37:02.809580  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:37:02.853630  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.883351    1511 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:37:02.853910  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.883402    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:37:02.854104  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.916164    1511 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:37:02.854331  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:37:02.855706  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:02.856214  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:02.856573  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:02.857089  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:37:02.916418  559927 logs.go:123] Gathering logs for dmesg ...
	I0927 00:37:02.916455  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:37:02.932480  559927 logs.go:123] Gathering logs for kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] ...
	I0927 00:37:02.932508  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:37:03.002890  559927 logs.go:123] Gathering logs for kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] ...
	I0927 00:37:03.002926  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:37:03.049813  559927 logs.go:123] Gathering logs for kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] ...
	I0927 00:37:03.049846  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:37:03.093274  559927 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:37:03.093302  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:37:03.235228  559927 logs.go:123] Gathering logs for etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] ...
	I0927 00:37:03.235262  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:37:03.286098  559927 logs.go:123] Gathering logs for coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] ...
	I0927 00:37:03.286134  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:37:03.330375  559927 logs.go:123] Gathering logs for CRI-O ...
	I0927 00:37:03.330463  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 00:37:03.436949  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:37:03.436986  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:37:03.437055  559927 out.go:270] X Problems detected in kubelet:
	W0927 00:37:03.437072  559927 out.go:270]   Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:37:03.437086  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:03.437094  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:03.437105  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:03.437111  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:37:03.437117  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:37:03.437124  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:13.449086  559927 system_pods.go:59] 18 kube-system pods found
	I0927 00:37:13.449126  559927 system_pods.go:61] "coredns-7c65d6cfc9-wnhpd" [4f3b2231-030c-4af9-beae-7c98c13d01cd] Running
	I0927 00:37:13.449134  559927 system_pods.go:61] "csi-hostpath-attacher-0" [c49fd5b5-341f-441f-981c-70e3f7bccbff] Running
	I0927 00:37:13.449139  559927 system_pods.go:61] "csi-hostpath-resizer-0" [21888ecf-1320-496d-97d5-a0c1e85ce981] Running
	I0927 00:37:13.449143  559927 system_pods.go:61] "csi-hostpathplugin-pst4l" [ae3ecba5-af16-41fb-a4c3-bf2c43689e50] Running
	I0927 00:37:13.449148  559927 system_pods.go:61] "etcd-addons-220192" [94827fa0-c442-4e24-a83e-22de3bff65e3] Running
	I0927 00:37:13.449152  559927 system_pods.go:61] "kindnet-4rr4t" [afd40f83-7a79-4edc-bbfc-ff6936a3158e] Running
	I0927 00:37:13.449157  559927 system_pods.go:61] "kube-apiserver-addons-220192" [0bec6c78-990c-4ffb-be43-dfb155b147f7] Running
	I0927 00:37:13.449161  559927 system_pods.go:61] "kube-controller-manager-addons-220192" [1353546b-84d9-4cd3-938e-6734b6b3413b] Running
	I0927 00:37:13.449172  559927 system_pods.go:61] "kube-ingress-dns-minikube" [586c242e-8199-4142-985e-e89f7d01e3cc] Running
	I0927 00:37:13.449178  559927 system_pods.go:61] "kube-proxy-shqd9" [476cb0de-772b-4e25-ac8c-7244a6d392e7] Running
	I0927 00:37:13.449186  559927 system_pods.go:61] "kube-scheduler-addons-220192" [c391b3f7-ca7f-48e9-9cec-7188a266035f] Running
	I0927 00:37:13.449190  559927 system_pods.go:61] "metrics-server-84c5f94fbc-zpbj2" [1a96d0d6-2c40-4cd4-ba04-605e67d179f7] Running
	I0927 00:37:13.449195  559927 system_pods.go:61] "nvidia-device-plugin-daemonset-dqrvw" [e6729774-57a9-49c2-a405-b1a541551dd4] Running
	I0927 00:37:13.449199  559927 system_pods.go:61] "registry-66c9cd494c-7997r" [06852bd1-3230-4615-b6a1-8834e426e02d] Running
	I0927 00:37:13.449203  559927 system_pods.go:61] "registry-proxy-ld2gg" [44a3013c-bbfc-4d08-9ed4-a5160422cdf0] Running
	I0927 00:37:13.449210  559927 system_pods.go:61] "snapshot-controller-56fcc65765-b4j5p" [de8a8d5b-ab34-41cb-ac84-b1c9dd58a1ff] Running
	I0927 00:37:13.449215  559927 system_pods.go:61] "snapshot-controller-56fcc65765-w6xf7" [e8e9ea4c-ac11-4dc7-85aa-75c8b2eb463e] Running
	I0927 00:37:13.449221  559927 system_pods.go:61] "storage-provisioner" [20b521d2-cf72-4c64-997c-c30b932659a1] Running
	I0927 00:37:13.449227  559927 system_pods.go:74] duration metric: took 11.113013969s to wait for pod list to return data ...
	I0927 00:37:13.449235  559927 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:37:13.451765  559927 default_sa.go:45] found service account: "default"
	I0927 00:37:13.451791  559927 default_sa.go:55] duration metric: took 2.546967ms for default service account to be created ...
	I0927 00:37:13.451801  559927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:37:13.461994  559927 system_pods.go:86] 18 kube-system pods found
	I0927 00:37:13.462032  559927 system_pods.go:89] "coredns-7c65d6cfc9-wnhpd" [4f3b2231-030c-4af9-beae-7c98c13d01cd] Running
	I0927 00:37:13.462039  559927 system_pods.go:89] "csi-hostpath-attacher-0" [c49fd5b5-341f-441f-981c-70e3f7bccbff] Running
	I0927 00:37:13.462045  559927 system_pods.go:89] "csi-hostpath-resizer-0" [21888ecf-1320-496d-97d5-a0c1e85ce981] Running
	I0927 00:37:13.462050  559927 system_pods.go:89] "csi-hostpathplugin-pst4l" [ae3ecba5-af16-41fb-a4c3-bf2c43689e50] Running
	I0927 00:37:13.462054  559927 system_pods.go:89] "etcd-addons-220192" [94827fa0-c442-4e24-a83e-22de3bff65e3] Running
	I0927 00:37:13.462059  559927 system_pods.go:89] "kindnet-4rr4t" [afd40f83-7a79-4edc-bbfc-ff6936a3158e] Running
	I0927 00:37:13.462063  559927 system_pods.go:89] "kube-apiserver-addons-220192" [0bec6c78-990c-4ffb-be43-dfb155b147f7] Running
	I0927 00:37:13.462091  559927 system_pods.go:89] "kube-controller-manager-addons-220192" [1353546b-84d9-4cd3-938e-6734b6b3413b] Running
	I0927 00:37:13.462098  559927 system_pods.go:89] "kube-ingress-dns-minikube" [586c242e-8199-4142-985e-e89f7d01e3cc] Running
	I0927 00:37:13.462112  559927 system_pods.go:89] "kube-proxy-shqd9" [476cb0de-772b-4e25-ac8c-7244a6d392e7] Running
	I0927 00:37:13.462117  559927 system_pods.go:89] "kube-scheduler-addons-220192" [c391b3f7-ca7f-48e9-9cec-7188a266035f] Running
	I0927 00:37:13.462121  559927 system_pods.go:89] "metrics-server-84c5f94fbc-zpbj2" [1a96d0d6-2c40-4cd4-ba04-605e67d179f7] Running
	I0927 00:37:13.462131  559927 system_pods.go:89] "nvidia-device-plugin-daemonset-dqrvw" [e6729774-57a9-49c2-a405-b1a541551dd4] Running
	I0927 00:37:13.462136  559927 system_pods.go:89] "registry-66c9cd494c-7997r" [06852bd1-3230-4615-b6a1-8834e426e02d] Running
	I0927 00:37:13.462142  559927 system_pods.go:89] "registry-proxy-ld2gg" [44a3013c-bbfc-4d08-9ed4-a5160422cdf0] Running
	I0927 00:37:13.462149  559927 system_pods.go:89] "snapshot-controller-56fcc65765-b4j5p" [de8a8d5b-ab34-41cb-ac84-b1c9dd58a1ff] Running
	I0927 00:37:13.462179  559927 system_pods.go:89] "snapshot-controller-56fcc65765-w6xf7" [e8e9ea4c-ac11-4dc7-85aa-75c8b2eb463e] Running
	I0927 00:37:13.462189  559927 system_pods.go:89] "storage-provisioner" [20b521d2-cf72-4c64-997c-c30b932659a1] Running
	I0927 00:37:13.462197  559927 system_pods.go:126] duration metric: took 10.389744ms to wait for k8s-apps to be running ...
	I0927 00:37:13.462204  559927 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:37:13.462274  559927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:37:13.475870  559927 system_svc.go:56] duration metric: took 13.657024ms WaitForService to wait for kubelet
	I0927 00:37:13.475900  559927 kubeadm.go:582] duration metric: took 2m40.094897458s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:37:13.475921  559927 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:37:13.479550  559927 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0927 00:37:13.479579  559927 node_conditions.go:123] node cpu capacity is 2
	I0927 00:37:13.479592  559927 node_conditions.go:105] duration metric: took 3.664619ms to run NodePressure ...
	I0927 00:37:13.479604  559927 start.go:241] waiting for startup goroutines ...
	I0927 00:37:13.479611  559927 start.go:246] waiting for cluster config update ...
	I0927 00:37:13.479628  559927 start.go:255] writing updated cluster config ...
	I0927 00:37:13.479920  559927 ssh_runner.go:195] Run: rm -f paused
	I0927 00:37:13.906550  559927 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:37:13.908395  559927 out.go:177] * Done! kubectl is now configured to use "addons-220192" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 27 00:49:03 addons-220192 crio[964]: time="2024-09-27 00:49:03.464261026Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 27 00:49:03 addons-220192 crio[964]: time="2024-09-27 00:49:03.487103670Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a586c7918a83e6f2a6e737a40ac7b7a97b9be55ff3cc4168982217d41437c6cc/merged/etc/passwd: no such file or directory"
	Sep 27 00:49:03 addons-220192 crio[964]: time="2024-09-27 00:49:03.487149429Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a586c7918a83e6f2a6e737a40ac7b7a97b9be55ff3cc4168982217d41437c6cc/merged/etc/group: no such file or directory"
	Sep 27 00:49:03 addons-220192 crio[964]: time="2024-09-27 00:49:03.526179520Z" level=info msg="Created container 777cf3576774fd0170d7c7da21ebdd48e80bc54b5c5f1aa877284e3b434b07e8: default/hello-world-app-55bf9c44b4-4f9hl/hello-world-app" id=04c8fcaf-b43f-4f77-9c44-8c562d0e6ef3 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 27 00:49:03 addons-220192 crio[964]: time="2024-09-27 00:49:03.526753119Z" level=info msg="Starting container: 777cf3576774fd0170d7c7da21ebdd48e80bc54b5c5f1aa877284e3b434b07e8" id=1817a723-295a-4197-97cc-07766489bf15 name=/runtime.v1.RuntimeService/StartContainer
	Sep 27 00:49:03 addons-220192 crio[964]: time="2024-09-27 00:49:03.538132834Z" level=info msg="Started container" PID=8143 containerID=777cf3576774fd0170d7c7da21ebdd48e80bc54b5c5f1aa877284e3b434b07e8 description=default/hello-world-app-55bf9c44b4-4f9hl/hello-world-app id=1817a723-295a-4197-97cc-07766489bf15 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31b2ecd44b5d31e933d2c94dff63fa15f4fa4b1bc16bd0f0b5ee33752f142906
	Sep 27 00:49:04 addons-220192 crio[964]: time="2024-09-27 00:49:04.052247850Z" level=info msg="Removing container: 17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420" id=cebb30b4-4f2a-4bdc-a1ec-c06b9515ea60 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 27 00:49:04 addons-220192 crio[964]: time="2024-09-27 00:49:04.078308577Z" level=info msg="Removed container 17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=cebb30b4-4f2a-4bdc-a1ec-c06b9515ea60 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 27 00:49:05 addons-220192 crio[964]: time="2024-09-27 00:49:05.761470972Z" level=info msg="Stopping container: f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8 (timeout: 2s)" id=d354de6a-28f6-4f84-8195-746b818ffc4c name=/runtime.v1.RuntimeService/StopContainer
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.074041165Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9df7e756-b70a-43a1-a54d-ab3f74c6524a name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.074268705Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9df7e756-b70a-43a1-a54d-ab3f74c6524a name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.767511076Z" level=warning msg="Stopping container f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=d354de6a-28f6-4f84-8195-746b818ffc4c name=/runtime.v1.RuntimeService/StopContainer
	Sep 27 00:49:07 addons-220192 conmon[4733]: conmon f353e2f491f9178f141a <ninfo>: container 4745 exited with status 137
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.906875317Z" level=info msg="Stopped container f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8: ingress-nginx/ingress-nginx-controller-bc57996ff-45pzp/controller" id=d354de6a-28f6-4f84-8195-746b818ffc4c name=/runtime.v1.RuntimeService/StopContainer
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.907434976Z" level=info msg="Stopping pod sandbox: 073f37da810c00d50798f407ea11ea59eee1836091c924c44a907ccfdcc9d5af" id=1d1a9419-b1d5-4c13-9b69-2fc7913e36ab name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.911157950Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-E3QMHEKJZBS65DU3 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-UXDZFR3VWVQ2MCK3 - [0:0]\n-X KUBE-HP-UXDZFR3VWVQ2MCK3\n-X KUBE-HP-E3QMHEKJZBS65DU3\nCOMMIT\n"
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.913686687Z" level=info msg="Closing host port tcp:80"
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.913731732Z" level=info msg="Closing host port tcp:443"
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.915022730Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.915047747Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.915228362Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-45pzp Namespace:ingress-nginx ID:073f37da810c00d50798f407ea11ea59eee1836091c924c44a907ccfdcc9d5af UID:ff0367ce-f147-4be6-bb10-f3c7976bbc1a NetNS:/var/run/netns/2635033d-2954-4b6a-929f-b363c74ec1b7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.915369257Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-45pzp from CNI network \"kindnet\" (type=ptp)"
	Sep 27 00:49:07 addons-220192 crio[964]: time="2024-09-27 00:49:07.935291863Z" level=info msg="Stopped pod sandbox: 073f37da810c00d50798f407ea11ea59eee1836091c924c44a907ccfdcc9d5af" id=1d1a9419-b1d5-4c13-9b69-2fc7913e36ab name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:49:08 addons-220192 crio[964]: time="2024-09-27 00:49:08.069057949Z" level=info msg="Removing container: f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8" id=ce5bf65f-e894-4126-ba2e-ecaa4a21af2a name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 27 00:49:08 addons-220192 crio[964]: time="2024-09-27 00:49:08.094520610Z" level=info msg="Removed container f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8: ingress-nginx/ingress-nginx-controller-bc57996ff-45pzp/controller" id=ce5bf65f-e894-4126-ba2e-ecaa4a21af2a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	777cf3576774f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app            0                   31b2ecd44b5d3       hello-world-app-55bf9c44b4-4f9hl
	deef59e3d12a1       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                              2 minutes ago       Running             nginx                      0                   a2699501fe7b9       nginx
	f79bc824b8278       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 13 minutes ago      Running             gcp-auth                   0                   c3d022a3b14c6       gcp-auth-89d5ffd79-6m9rp
	7d205c93f0684       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     13 minutes ago      Running             nvidia-device-plugin-ctr   0                   a3d67546ff1f7       nvidia-device-plugin-daemonset-dqrvw
	32362458a9252       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             13 minutes ago      Exited              patch                      2                   4e77855fde36c       ingress-nginx-admission-patch-rbwjb
	b41d6538fc0e2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago      Exited              create                     0                   749067cfde9c6       ingress-nginx-admission-create-cp22f
	fc011aec16aeb       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              13 minutes ago      Running             yakd                       0                   af034f9d51002       yakd-dashboard-67d98fc6b-rxkjm
	bc524d9595882       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             13 minutes ago      Running             local-path-provisioner     0                   ec2cf1c475ba2       local-path-provisioner-86d989889c-7czzf
	4856201f50285       gcr.io/cloud-spanner-emulator/emulator@sha256:6ce1265c73355797b34d2531c7146eed3996346f860517e35d1434182eb5f01d               13 minutes ago      Running             cloud-spanner-emulator     0                   44bcf8e3d7877       cloud-spanner-emulator-5b584cc74-4hjb6
	880e241766c14       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        13 minutes ago      Running             metrics-server             0                   8cbcf8b4931cd       metrics-server-84c5f94fbc-zpbj2
	75b98e47380ef       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             13 minutes ago      Running             storage-provisioner        0                   794276bcaa01b       storage-provisioner
	1a8d7c13a8719       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             13 minutes ago      Running             coredns                    0                   ef54c3fa3cd28       coredns-7c65d6cfc9-wnhpd
	5e3fe54c99e93       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             14 minutes ago      Running             kube-proxy                 0                   16758e5c05deb       kube-proxy-shqd9
	d7a7261efecf3       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             14 minutes ago      Running             kindnet-cni                0                   39c54e6136da4       kindnet-4rr4t
	04b9c719c715f       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             14 minutes ago      Running             kube-apiserver             0                   e263f38ae3b5e       kube-apiserver-addons-220192
	555dc55ff545e       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             14 minutes ago      Running             kube-scheduler             0                   e432a0cbdf14f       kube-scheduler-addons-220192
	2bfc8d78fdf58       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             14 minutes ago      Running             kube-controller-manager    0                   75ef397915466       kube-controller-manager-addons-220192
	6b36b1e46732b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             14 minutes ago      Running             etcd                       0                   8a08dc7f6d87c       etcd-addons-220192
	
	
	==> coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] <==
	[INFO] 10.244.0.17:32921 - 15145 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009599s
	[INFO] 10.244.0.17:32921 - 19537 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002489894s
	[INFO] 10.244.0.17:32921 - 61082 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002491617s
	[INFO] 10.244.0.17:32921 - 31100 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000128301s
	[INFO] 10.244.0.17:32921 - 35939 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000126651s
	[INFO] 10.244.0.17:41730 - 50927 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109577s
	[INFO] 10.244.0.17:41730 - 51164 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000183225s
	[INFO] 10.244.0.17:33425 - 39515 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088917s
	[INFO] 10.244.0.17:33425 - 39334 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000158479s
	[INFO] 10.244.0.17:42680 - 3435 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000165895s
	[INFO] 10.244.0.17:42680 - 3246 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000204483s
	[INFO] 10.244.0.17:41066 - 45139 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001539254s
	[INFO] 10.244.0.17:41066 - 44967 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001594653s
	[INFO] 10.244.0.17:35895 - 35537 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000064679s
	[INFO] 10.244.0.17:35895 - 35134 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000060282s
	[INFO] 10.244.0.20:38814 - 12571 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166667s
	[INFO] 10.244.0.20:57837 - 31175 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000084199s
	[INFO] 10.244.0.20:59015 - 52667 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144571s
	[INFO] 10.244.0.20:43948 - 22611 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081081s
	[INFO] 10.244.0.20:39471 - 5951 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114837s
	[INFO] 10.244.0.20:53453 - 53244 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079014s
	[INFO] 10.244.0.20:50375 - 42686 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002646412s
	[INFO] 10.244.0.20:38002 - 62070 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002044169s
	[INFO] 10.244.0.20:54992 - 48913 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001395109s
	[INFO] 10.244.0.20:42555 - 4765 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002338735s
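
The registry and storage.googleapis.com lookups above show ordinary cluster DNS search-path expansion: each name is first tried with the pod's search suffixes (namespace.svc.cluster.local, svc.cluster.local, cluster.local, then the EC2 internal domain), which return NXDOMAIN, before the final form answers NOERROR. That is expected ndots:5 behaviour, not a resolver fault. A minimal sketch for confirming the search path from inside the cluster, assuming any running pod that ships a shell (the pod placeholder is illustrative, not taken from this report):

	# Inspect the resolver config injected into a pod (ClusterFirst DNS policy)
	kubectl --context addons-220192 exec -it <pod-with-a-shell> -- cat /etc/resolv.conf
	# typical shape (actual values depend on the cluster):
	#   search <namespace>.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	#   nameserver 10.96.0.10
	#   options ndots:5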
	
	
	==> describe nodes <==
	Name:               addons-220192
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-220192
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-220192
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_34_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-220192
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:34:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-220192
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:49:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:47:03 +0000   Fri, 27 Sep 2024 00:34:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:47:03 +0000   Fri, 27 Sep 2024 00:34:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:47:03 +0000   Fri, 27 Sep 2024 00:34:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:47:03 +0000   Fri, 27 Sep 2024 00:35:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-220192
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6db0b236675141869357d8bd6acda62f
	  System UUID:                96d22be3-917a-4ba2-9d29-91009fed055d
	  Boot ID:                    7df4580f-f941-474d-8050-3bbd7f78d321
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     cloud-spanner-emulator-5b584cc74-4hjb6     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-4f9hl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-89d5ffd79-6m9rp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-wnhpd                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-220192                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-4rr4t                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-220192               250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-220192      200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-shqd9                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-220192               100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-zpbj2            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 nvidia-device-plugin-daemonset-dqrvw       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-7czzf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-rxkjm             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 14m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m   kubelet          Node addons-220192 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m   kubelet          Node addons-220192 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m   kubelet          Node addons-220192 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m   node-controller  Node addons-220192 event: Registered Node addons-220192 in Controller
	  Normal   NodeReady                13m   kubelet          Node addons-220192 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep26 22:08] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.694148] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Sep27 00:06] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] <==
	{"level":"warn","ts":"2024-09-27T00:34:36.642115Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.300727ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4619"}
	{"level":"info","ts":"2024-09-27T00:34:36.655610Z","caller":"traceutil/trace.go:171","msg":"trace[277343402] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:336; }","duration":"117.031913ms","start":"2024-09-27T00:34:36.538562Z","end":"2024-09-27T00:34:36.655594Z","steps":["trace[277343402] 'agreement among raft nodes before linearized reading'  (duration: 68.259623ms)","trace[277343402] 'range keys from in-memory index tree'  (duration: 35.006627ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:34:36.705293Z","caller":"traceutil/trace.go:171","msg":"trace[706582313] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"123.045604ms","start":"2024-09-27T00:34:36.582228Z","end":"2024-09-27T00:34:36.705274Z","steps":["trace[706582313] 'process raft request'  (duration: 120.583492ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.705678Z","caller":"traceutil/trace.go:171","msg":"trace[75754528] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"123.945357ms","start":"2024-09-27T00:34:36.581722Z","end":"2024-09-27T00:34:36.705667Z","steps":["trace[75754528] 'process raft request'  (duration: 83.586816ms)","trace[75754528] 'compare'  (duration: 37.390308ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:34:36.707643Z","caller":"traceutil/trace.go:171","msg":"trace[1978378721] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"119.988454ms","start":"2024-09-27T00:34:36.587640Z","end":"2024-09-27T00:34:36.707629Z","steps":["trace[1978378721] 'process raft request'  (duration: 115.241317ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.707788Z","caller":"traceutil/trace.go:171","msg":"trace[245549885] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"125.39105ms","start":"2024-09-27T00:34:36.582391Z","end":"2024-09-27T00:34:36.707782Z","steps":["trace[245549885] 'process raft request'  (duration: 120.456628ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.708110Z","caller":"traceutil/trace.go:171","msg":"trace[386138567] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"101.159781ms","start":"2024-09-27T00:34:36.606943Z","end":"2024-09-27T00:34:36.708103Z","steps":["trace[386138567] 'process raft request'  (duration: 95.968996ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.708135Z","caller":"traceutil/trace.go:171","msg":"trace[1196894315] linearizableReadLoop","detail":"{readStateIndex:349; appliedIndex:344; }","duration":"118.831173ms","start":"2024-09-27T00:34:36.589299Z","end":"2024-09-27T00:34:36.708130Z","steps":["trace[1196894315] 'read index received'  (duration: 75.87577ms)","trace[1196894315] 'applied index is now lower than readState.Index'  (duration: 42.954746ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T00:34:36.708195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.336367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-27T00:34:36.761229Z","caller":"traceutil/trace.go:171","msg":"trace[1688764860] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:342; }","duration":"210.35389ms","start":"2024-09-27T00:34:36.550840Z","end":"2024-09-27T00:34:36.761194Z","steps":["trace[1688764860] 'agreement among raft nodes before linearized reading'  (duration: 157.305008ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.11481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:34:36.767505Z","caller":"traceutil/trace.go:171","msg":"trace[1525429030] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:342; }","duration":"160.358433ms","start":"2024-09-27T00:34:36.607124Z","end":"2024-09-27T00:34:36.767483Z","steps":["trace[1525429030] 'agreement among raft nodes before linearized reading'  (duration: 101.104668ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.890179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-09-27T00:34:36.767882Z","caller":"traceutil/trace.go:171","msg":"trace[1578121818] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:342; }","duration":"212.443063ms","start":"2024-09-27T00:34:36.555427Z","end":"2024-09-27T00:34:36.767870Z","steps":["trace[1578121818] 'agreement among raft nodes before linearized reading'  (duration: 152.878143ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.946598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:34:36.768392Z","caller":"traceutil/trace.go:171","msg":"trace[605102501] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:342; }","duration":"186.060789ms","start":"2024-09-27T00:34:36.582319Z","end":"2024-09-27T00:34:36.768380Z","steps":["trace[605102501] 'agreement among raft nodes before linearized reading'  (duration: 125.936571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.170953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3684"}
	{"level":"warn","ts":"2024-09-27T00:34:36.718091Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.195588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-27T00:34:36.783455Z","caller":"traceutil/trace.go:171","msg":"trace[448706870] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:342; }","duration":"232.572784ms","start":"2024-09-27T00:34:36.550868Z","end":"2024-09-27T00:34:36.783441Z","steps":["trace[448706870] 'agreement among raft nodes before linearized reading'  (duration: 167.157426ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.784208Z","caller":"traceutil/trace.go:171","msg":"trace[557653940] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:342; }","duration":"202.077926ms","start":"2024-09-27T00:34:36.582121Z","end":"2024-09-27T00:34:36.784199Z","steps":["trace[557653940] 'agreement among raft nodes before linearized reading'  (duration: 126.155561ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:37.377526Z","caller":"traceutil/trace.go:171","msg":"trace[877986833] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"141.52327ms","start":"2024-09-27T00:34:37.235983Z","end":"2024-09-27T00:34:37.377506Z","steps":["trace[877986833] 'process raft request'  (duration: 50.41116ms)","trace[877986833] 'compare'  (duration: 90.99976ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:34:37.378234Z","caller":"traceutil/trace.go:171","msg":"trace[1370352228] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"141.929496ms","start":"2024-09-27T00:34:37.236293Z","end":"2024-09-27T00:34:37.378223Z","steps":["trace[1370352228] 'process raft request'  (duration: 141.866039ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:44:24.160820Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1524}
	{"level":"info","ts":"2024-09-27T00:44:24.194054Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1524,"took":"32.739963ms","hash":154592831,"current-db-size-bytes":6713344,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3227648,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-09-27T00:44:24.194100Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":154592831,"revision":1524,"compact-revision":-1}
	
	
	==> gcp-auth [f79bc824b8278bffc4be0ad3ad49df8f62945f0be7f07c2e7eba40dd9ed2637d] <==
	2024/09/27 00:36:10 GCP Auth Webhook started!
	2024/09/27 00:37:14 Ready to marshal response ...
	2024/09/27 00:37:14 Ready to write response ...
	2024/09/27 00:37:14 Ready to marshal response ...
	2024/09/27 00:37:14 Ready to write response ...
	2024/09/27 00:37:14 Ready to marshal response ...
	2024/09/27 00:37:14 Ready to write response ...
	2024/09/27 00:45:18 Ready to marshal response ...
	2024/09/27 00:45:18 Ready to write response ...
	2024/09/27 00:45:18 Ready to marshal response ...
	2024/09/27 00:45:18 Ready to write response ...
	2024/09/27 00:45:18 Ready to marshal response ...
	2024/09/27 00:45:18 Ready to write response ...
	2024/09/27 00:45:27 Ready to marshal response ...
	2024/09/27 00:45:27 Ready to write response ...
	2024/09/27 00:45:53 Ready to marshal response ...
	2024/09/27 00:45:53 Ready to write response ...
	2024/09/27 00:46:09 Ready to marshal response ...
	2024/09/27 00:46:09 Ready to write response ...
	2024/09/27 00:46:42 Ready to marshal response ...
	2024/09/27 00:46:42 Ready to write response ...
	2024/09/27 00:49:01 Ready to marshal response ...
	2024/09/27 00:49:01 Ready to write response ...
	
	
	==> kernel <==
	 00:49:13 up  4:31,  0 users,  load average: 0.23, 0.44, 1.10
	Linux addons-220192 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] <==
	I0927 00:47:08.619899       1 main.go:299] handling current node
	I0927 00:47:18.619438       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:47:18.619471       1 main.go:299] handling current node
	I0927 00:47:28.619862       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:47:28.619913       1 main.go:299] handling current node
	I0927 00:47:38.619486       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:47:38.619526       1 main.go:299] handling current node
	I0927 00:47:48.619326       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:47:48.619359       1 main.go:299] handling current node
	I0927 00:47:58.619907       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:47:58.619942       1 main.go:299] handling current node
	I0927 00:48:08.619286       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:48:08.619324       1 main.go:299] handling current node
	I0927 00:48:18.619233       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:48:18.619267       1 main.go:299] handling current node
	I0927 00:48:28.619953       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:48:28.619986       1 main.go:299] handling current node
	I0927 00:48:38.619501       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:48:38.619535       1 main.go:299] handling current node
	I0927 00:48:48.620071       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:48:48.620107       1 main.go:299] handling current node
	I0927 00:48:58.619786       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:48:58.619844       1 main.go:299] handling current node
	I0927 00:49:08.619918       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:49:08.619957       1 main.go:299] handling current node
	
	
	==> kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0927 00:36:39.626030       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.158.28:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.158.28:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.158.28:443: connect: connection refused" logger="UnhandledError"
	I0927 00:36:39.717054       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0927 00:45:18.452440       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.96.241"}
	I0927 00:46:03.777180       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0927 00:46:25.606817       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.606863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.636258       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.636318       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.715476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.715518       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.735524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.735605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.743258       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.743298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0927 00:46:26.719243       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0927 00:46:26.744004       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0927 00:46:26.865842       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0927 00:46:36.881429       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0927 00:46:37.930581       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0927 00:46:42.463824       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0927 00:46:42.770878       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.223.22"}
	I0927 00:49:02.205306       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.7.117"}
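
The errors at the top of this log (503 from the openAPI download and "connection refused" against 10.110.158.28:443 for v1beta1.metrics.k8s.io) indicate the aggregator probed the metrics-server backend before it was serving, which is consistent with the later "Metrics not available" failures. A minimal sketch for checking whether the aggregated API eventually became Available, assuming the standard metrics-server deployment name (the commands are illustrative, not part of the test harness):

	# Availability of the aggregated metrics API
	kubectl --context addons-220192 get apiservice v1beta1.metrics.k8s.io -o wide
	# Recent metrics-server output, in case it is still failing readiness
	kubectl --context addons-220192 -n kube-system logs deploy/metrics-server --tail=50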
	
	
	==> kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] <==
	W0927 00:47:48.000084       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:47:48.000219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:47:57.937777       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:47:57.937817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:48:24.538645       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:48:24.538686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:48:25.390877       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:48:25.390921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:48:31.670773       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:48:31.670820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:48:53.665842       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:48:53.665880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:49:01.954630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.508626ms"
	I0927 00:49:01.965161       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.362048ms"
	I0927 00:49:01.965554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.712µs"
	I0927 00:49:01.971307       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.356µs"
	I0927 00:49:04.128808       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="28.590884ms"
	I0927 00:49:04.128886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="46.235µs"
	I0927 00:49:04.737268       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0927 00:49:04.741850       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.203µs"
	I0927 00:49:04.753634       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0927 00:49:05.366254       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:49:05.366294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:49:10.944544       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:49:10.944587       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] <==
	I0927 00:34:38.907788       1 server_linux.go:66] "Using iptables proxy"
	I0927 00:34:39.331001       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0927 00:34:39.331159       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:34:39.614187       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0927 00:34:39.614314       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:34:39.617555       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:34:39.625699       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:34:39.625787       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:34:39.645465       1 config.go:199] "Starting service config controller"
	I0927 00:34:39.650076       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:34:39.645886       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:34:39.650198       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:34:39.648423       1 config.go:328] "Starting node config controller"
	I0927 00:34:39.650407       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:34:39.750364       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:34:39.751607       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:34:39.751679       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] <==
	W0927 00:34:26.563980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:34:26.564047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:34:26.564555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:34:26.564682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564768       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:34:26.564871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:34:26.564995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:34:26.565087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:34:26.565193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.425835       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:34:27.425963       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:34:27.457379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:34:27.457505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.578493       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 00:34:27.578645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.626921       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:34:27.627048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.640709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:34:27.640830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0927 00:34:29.347645       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:49:03 addons-220192 kubelet[1511]: I0927 00:49:03.329214    1511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqjx7\" (UniqueName: \"kubernetes.io/projected/586c242e-8199-4142-985e-e89f7d01e3cc-kube-api-access-fqjx7\") pod \"586c242e-8199-4142-985e-e89f7d01e3cc\" (UID: \"586c242e-8199-4142-985e-e89f7d01e3cc\") "
	Sep 27 00:49:03 addons-220192 kubelet[1511]: I0927 00:49:03.337648    1511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586c242e-8199-4142-985e-e89f7d01e3cc-kube-api-access-fqjx7" (OuterVolumeSpecName: "kube-api-access-fqjx7") pod "586c242e-8199-4142-985e-e89f7d01e3cc" (UID: "586c242e-8199-4142-985e-e89f7d01e3cc"). InnerVolumeSpecName "kube-api-access-fqjx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:49:03 addons-220192 kubelet[1511]: I0927 00:49:03.429665    1511 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fqjx7\" (UniqueName: \"kubernetes.io/projected/586c242e-8199-4142-985e-e89f7d01e3cc-kube-api-access-fqjx7\") on node \"addons-220192\" DevicePath \"\""
	Sep 27 00:49:04 addons-220192 kubelet[1511]: I0927 00:49:04.045022    1511 scope.go:117] "RemoveContainer" containerID="17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420"
	Sep 27 00:49:04 addons-220192 kubelet[1511]: I0927 00:49:04.078819    1511 scope.go:117] "RemoveContainer" containerID="17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420"
	Sep 27 00:49:04 addons-220192 kubelet[1511]: E0927 00:49:04.079346    1511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420\": container with ID starting with 17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420 not found: ID does not exist" containerID="17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420"
	Sep 27 00:49:04 addons-220192 kubelet[1511]: I0927 00:49:04.079381    1511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420"} err="failed to get container status \"17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420\": rpc error: code = NotFound desc = could not find container \"17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420\": container with ID starting with 17b4809fb1c3ec9a3a179dff510097d782a05eea31cdf9f4193e00ca7bbe1420 not found: ID does not exist"
	Sep 27 00:49:04 addons-220192 kubelet[1511]: I0927 00:49:04.750414    1511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-4f9hl" podStartSLOduration=2.605433627 podStartE2EDuration="3.75039368s" podCreationTimestamp="2024-09-27 00:49:01 +0000 UTC" firstStartedPulling="2024-09-27 00:49:02.31731313 +0000 UTC m=+873.345178425" lastFinishedPulling="2024-09-27 00:49:03.462273183 +0000 UTC m=+874.490138478" observedRunningTime="2024-09-27 00:49:04.101014946 +0000 UTC m=+875.128880298" watchObservedRunningTime="2024-09-27 00:49:04.75039368 +0000 UTC m=+875.778258975"
	Sep 27 00:49:05 addons-220192 kubelet[1511]: I0927 00:49:05.074638    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="028ebe1e-2dc3-4432-a3d7-c0f9347b6701" path="/var/lib/kubelet/pods/028ebe1e-2dc3-4432-a3d7-c0f9347b6701/volumes"
	Sep 27 00:49:05 addons-220192 kubelet[1511]: I0927 00:49:05.075080    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586c242e-8199-4142-985e-e89f7d01e3cc" path="/var/lib/kubelet/pods/586c242e-8199-4142-985e-e89f7d01e3cc/volumes"
	Sep 27 00:49:05 addons-220192 kubelet[1511]: I0927 00:49:05.075427    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e835d673-32f9-43e3-aff0-cd786d33a8ab" path="/var/lib/kubelet/pods/e835d673-32f9-43e3-aff0-cd786d33a8ab/volumes"
	Sep 27 00:49:07 addons-220192 kubelet[1511]: E0927 00:49:07.074846    1511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb2a80ac-9ca0-4ac1-8260-ec32cfb893e8"
	Sep 27 00:49:08 addons-220192 kubelet[1511]: I0927 00:49:08.054888    1511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff0367ce-f147-4be6-bb10-f3c7976bbc1a-webhook-cert\") pod \"ff0367ce-f147-4be6-bb10-f3c7976bbc1a\" (UID: \"ff0367ce-f147-4be6-bb10-f3c7976bbc1a\") "
	Sep 27 00:49:08 addons-220192 kubelet[1511]: I0927 00:49:08.054964    1511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bpmv\" (UniqueName: \"kubernetes.io/projected/ff0367ce-f147-4be6-bb10-f3c7976bbc1a-kube-api-access-7bpmv\") pod \"ff0367ce-f147-4be6-bb10-f3c7976bbc1a\" (UID: \"ff0367ce-f147-4be6-bb10-f3c7976bbc1a\") "
	Sep 27 00:49:08 addons-220192 kubelet[1511]: I0927 00:49:08.065459    1511 scope.go:117] "RemoveContainer" containerID="f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8"
	Sep 27 00:49:08 addons-220192 kubelet[1511]: I0927 00:49:08.070005    1511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0367ce-f147-4be6-bb10-f3c7976bbc1a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ff0367ce-f147-4be6-bb10-f3c7976bbc1a" (UID: "ff0367ce-f147-4be6-bb10-f3c7976bbc1a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 27 00:49:08 addons-220192 kubelet[1511]: I0927 00:49:08.070737    1511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0367ce-f147-4be6-bb10-f3c7976bbc1a-kube-api-access-7bpmv" (OuterVolumeSpecName: "kube-api-access-7bpmv") pod "ff0367ce-f147-4be6-bb10-f3c7976bbc1a" (UID: "ff0367ce-f147-4be6-bb10-f3c7976bbc1a"). InnerVolumeSpecName "kube-api-access-7bpmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:49:08 addons-220192 kubelet[1511]: I0927 00:49:08.095284    1511 scope.go:117] "RemoveContainer" containerID="f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8"
	Sep 27 00:49:08 addons-220192 kubelet[1511]: E0927 00:49:08.096050    1511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8\": container with ID starting with f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8 not found: ID does not exist" containerID="f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8"
	Sep 27 00:49:08 addons-220192 kubelet[1511]: I0927 00:49:08.096085    1511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8"} err="failed to get container status \"f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8\": rpc error: code = NotFound desc = could not find container \"f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8\": container with ID starting with f353e2f491f9178f141af43c8bb8e65bbcf4d9d6e54f0d37e710a0c7a4245bb8 not found: ID does not exist"
	Sep 27 00:49:08 addons-220192 kubelet[1511]: I0927 00:49:08.156147    1511 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7bpmv\" (UniqueName: \"kubernetes.io/projected/ff0367ce-f147-4be6-bb10-f3c7976bbc1a-kube-api-access-7bpmv\") on node \"addons-220192\" DevicePath \"\""
	Sep 27 00:49:08 addons-220192 kubelet[1511]: I0927 00:49:08.156185    1511 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff0367ce-f147-4be6-bb10-f3c7976bbc1a-webhook-cert\") on node \"addons-220192\" DevicePath \"\""
	Sep 27 00:49:09 addons-220192 kubelet[1511]: I0927 00:49:09.074294    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0367ce-f147-4be6-bb10-f3c7976bbc1a" path="/var/lib/kubelet/pods/ff0367ce-f147-4be6-bb10-f3c7976bbc1a/volumes"
	Sep 27 00:49:09 addons-220192 kubelet[1511]: E0927 00:49:09.388180    1511 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398149387945355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:547957,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:49:09 addons-220192 kubelet[1511]: E0927 00:49:09.388217    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398149387945355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:547957,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [75b98e47380efba40cfb3e8a5003cf4e028dcd407cc6a050e8ed0e60a3c3168e] <==
	I0927 00:35:20.141906       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:35:20.155589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:35:20.158853       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:35:20.168600       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:35:20.168906       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-220192_3340d466-8fff-465f-820a-19104d1219e9!
	I0927 00:35:20.169972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88317798-314a-4def-996f-d4666fa1d4d1", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-220192_3340d466-8fff-465f-820a-19104d1219e9 became leader
	I0927 00:35:20.269123       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-220192_3340d466-8fff-465f-820a-19104d1219e9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-220192 -n addons-220192
helpers_test.go:261: (dbg) Run:  kubectl --context addons-220192 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-220192 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-220192 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-220192/192.168.49.2
	Start Time:       Fri, 27 Sep 2024 00:37:14 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lzqg5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lzqg5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/busybox to addons-220192
	  Normal   Pulling    10m (x4 over 12m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 12m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 12m)    kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 11m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    107s (x43 over 11m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.19s)
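
The non-running busybox pod described above never starts because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password", even though the image is public; the this_is_fake credential environment injected by the gcp-auth addon suggests the pull may be attempted with placeholder GCP credentials rather than anonymously. A minimal sketch for checking whether the node itself can pull the image outside the pod's credential context, assuming the docker driver and crictl inside the minikube node container (illustrative, not part of the harness):

	# Pull directly on the node, bypassing any pod-level pull secrets
	docker exec addons-220192 crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# Review the pull-failure events recorded for the pod
	kubectl --context addons-220192 get events -n default --field-selector involvedObject.name=busybox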

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (357.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.082349ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zpbj2" [1a96d0d6-2c40-4cd4-ba04-605e67d179f7] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004118346s
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (135.665508ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 11m54.222535927s

                                                
                                                
** /stderr **
I0927 00:46:31.225337  559158 retry.go:31] will retry after 2.309668585s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (87.198839ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 11m56.619416917s

                                                
                                                
** /stderr **
I0927 00:46:33.622876  559158 retry.go:31] will retry after 5.672187991s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (88.908444ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 12m2.382670878s

                                                
                                                
** /stderr **
I0927 00:46:39.385885  559158 retry.go:31] will retry after 6.101624815s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (89.841362ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 12m8.575333891s

                                                
                                                
** /stderr **
I0927 00:46:45.578626  559158 retry.go:31] will retry after 13.664449895s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (92.261179ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 12m22.336652862s

                                                
                                                
** /stderr **
I0927 00:46:59.339645  559158 retry.go:31] will retry after 21.054047239s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (89.113043ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 12m43.484356741s

                                                
                                                
** /stderr **
I0927 00:47:20.488085  559158 retry.go:31] will retry after 18.765852342s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (91.816472ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 13m2.344714846s

                                                
                                                
** /stderr **
I0927 00:47:39.347756  559158 retry.go:31] will retry after 42.211001984s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (84.07587ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 13m44.640966861s

                                                
                                                
** /stderr **
I0927 00:48:21.643960  559158 retry.go:31] will retry after 1m6.847210859s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (83.602372ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 14m51.572242937s

                                                
                                                
** /stderr **
I0927 00:49:28.575301  559158 retry.go:31] will retry after 49.903110171s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (88.376596ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 15m41.563834562s

                                                
                                                
** /stderr **
I0927 00:50:18.567136  559158 retry.go:31] will retry after 1m0.683130161s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (96.165095ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 16m42.342688471s

                                                
                                                
** /stderr **
I0927 00:51:19.347320  559158 retry.go:31] will retry after 1m1.314992039s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-220192 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-220192 top pods -n kube-system: exit status 1 (85.679316ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-wnhpd, age: 17m43.747650206s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
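Every kubectl top attempt above fails because the Metrics API never returned data for the kube-system pods within the test's retry window. A few standard follow-up checks, sketched here as illustrative commands only (assuming the addons-220192 profile is still running; v1beta1.metrics.k8s.io is the APIService metrics-server normally registers):

  kubectl --context addons-220192 get apiservice v1beta1.metrics.k8s.io
  kubectl --context addons-220192 -n kube-system logs -l k8s-app=metrics-server --tail=50
  kubectl --context addons-220192 get --raw /apis/metrics.k8s.io/v1beta1/nodes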
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-220192
helpers_test.go:235: (dbg) docker inspect addons-220192:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b",
	        "Created": "2024-09-27T00:34:02.077711994Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 560408,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-27T00:34:02.205411751Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/hosts",
	        "LogPath": "/var/lib/docker/containers/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b/d422e214370b2c42e3f8fefdb034ec6a32b66ac61da65610a7675682c1d93c9b-json.log",
	        "Name": "/addons-220192",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-220192:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-220192",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a-init/diff:/var/lib/docker/overlay2/e55adca0cb8a4469e5ee8e2f29139ff0ae0fed3b714ff629d2562144f224236f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0793fd05507618b00e1cf7c9b3149e5680c33ad6255fa927fc31c2a001bb624a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-220192",
	                "Source": "/var/lib/docker/volumes/addons-220192/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-220192",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-220192",
	                "name.minikube.sigs.k8s.io": "addons-220192",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb69f56da587fa8de40f3ac5f3f88f4566733f9673b58beb1d3e2d5b04e449e4",
	            "SandboxKey": "/var/run/docker/netns/eb69f56da587",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-220192": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "17b152e28b32de3f994213bf60b3fa21cfee26682153643fc3b71f12f405c393",
	                    "EndpointID": "8d6fe335b06a81d7595798770e72c7f67d0e3bb540d515a162969aad9ac12807",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-220192",
	                        "d422e214370b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
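For reference, the inspect output above shows the container's published ports: the Kubernetes apiserver (8443/tcp) on 127.0.0.1:33504 and SSH (22/tcp) on 127.0.0.1:33501. Outside the test harness the same mapping can be read back with, for example:

  docker port addons-220192
  docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-220192

(The second form mirrors the Go template the harness itself uses further down in these logs for 22/tcp.)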
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-220192 -n addons-220192
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-220192 logs -n 25: (1.493676733s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-763965                                                                     | download-only-763965   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| start   | --download-only -p                                                                          | download-docker-575684 | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | download-docker-575684                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-575684                                                                   | download-docker-575684 | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-878606   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | binary-mirror-878606                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39419                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-878606                                                                     | binary-mirror-878606   | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| addons  | disable dashboard -p                                                                        | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | addons-220192                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | addons-220192                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-220192 --wait=true                                                                | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:45 UTC | 27 Sep 24 00:45 UTC |
	|         | -p addons-220192                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-220192 addons disable                                                                | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:45 UTC | 27 Sep 24 00:45 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-220192 addons                                                                        | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-220192 addons                                                                        | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-220192 ip                                                                            | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	| addons  | addons-220192 addons disable                                                                | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC | 27 Sep 24 00:46 UTC |
	|         | addons-220192                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-220192 ssh curl -s                                                                   | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:46 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-220192 ip                                                                            | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	| addons  | addons-220192 addons disable                                                                | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-220192 addons disable                                                                | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	|         | -p addons-220192                                                                            |                        |         |         |                     |                     |
	| addons  | addons-220192 addons disable                                                                | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| ssh     | addons-220192 ssh cat                                                                       | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	|         | /opt/local-path-provisioner/pvc-6c77448d-421d-4ba2-854e-92e4b80ec990_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-220192 addons disable                                                                | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:49 UTC | 27 Sep 24 00:49 UTC |
	|         | addons-220192                                                                               |                        |         |         |                     |                     |
	| addons  | addons-220192 addons                                                                        | addons-220192          | jenkins | v1.34.0 | 27 Sep 24 00:52 UTC | 27 Sep 24 00:52 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:33:38
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:33:38.065367  559927 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:33:38.065662  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:38.065684  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:33:38.065691  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:38.066134  559927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 00:33:38.067015  559927 out.go:352] Setting JSON to false
	I0927 00:33:38.067932  559927 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15361,"bootTime":1727381857,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0927 00:33:38.068011  559927 start.go:139] virtualization:  
	I0927 00:33:38.070248  559927 out.go:177] * [addons-220192] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:33:38.071946  559927 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:33:38.071998  559927 notify.go:220] Checking for updates...
	I0927 00:33:38.075858  559927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:33:38.077758  559927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 00:33:38.079450  559927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	I0927 00:33:38.081273  559927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:33:38.082746  559927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:33:38.084258  559927 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:33:38.110806  559927 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:33:38.110932  559927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:38.175583  559927 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:33:38.165974566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:38.175704  559927 docker.go:318] overlay module found
	I0927 00:33:38.178529  559927 out.go:177] * Using the docker driver based on user configuration
	I0927 00:33:38.179548  559927 start.go:297] selected driver: docker
	I0927 00:33:38.179564  559927 start.go:901] validating driver "docker" against <nil>
	I0927 00:33:38.179577  559927 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:33:38.180219  559927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:38.238992  559927 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-27 00:33:38.229229626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:38.239202  559927 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:33:38.239427  559927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:33:38.240920  559927 out.go:177] * Using Docker driver with root privileges
	I0927 00:33:38.242287  559927 cni.go:84] Creating CNI manager for ""
	I0927 00:33:38.242357  559927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:33:38.242365  559927 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:33:38.242444  559927 start.go:340] cluster config:
	{Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:33:38.244624  559927 out.go:177] * Starting "addons-220192" primary control-plane node in "addons-220192" cluster
	I0927 00:33:38.245946  559927 cache.go:121] Beginning downloading kic base image for docker with crio
	I0927 00:33:38.247419  559927 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:33:38.248793  559927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:33:38.248850  559927 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0927 00:33:38.248878  559927 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:33:38.248883  559927 cache.go:56] Caching tarball of preloaded images
	I0927 00:33:38.248983  559927 preload.go:172] Found /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0927 00:33:38.248995  559927 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 00:33:38.249334  559927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/config.json ...
	I0927 00:33:38.249364  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/config.json: {Name:mkb4ce982f7db05f161e177b73decd3cb5d108a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:33:38.262886  559927 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:33:38.263010  559927 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:33:38.263042  559927 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 00:33:38.263053  559927 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 00:33:38.263061  559927 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 00:33:38.263070  559927 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0927 00:33:55.153743  559927 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0927 00:33:55.153786  559927 cache.go:194] Successfully downloaded all kic artifacts
	I0927 00:33:55.153817  559927 start.go:360] acquireMachinesLock for addons-220192: {Name:mk630666e0be44a920ddd2e3008b4121da78b597 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 00:33:55.153958  559927 start.go:364] duration metric: took 117.166µs to acquireMachinesLock for "addons-220192"
	I0927 00:33:55.153999  559927 start.go:93] Provisioning new machine with config: &{Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:33:55.154087  559927 start.go:125] createHost starting for "" (driver="docker")
	I0927 00:33:55.156404  559927 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0927 00:33:55.156691  559927 start.go:159] libmachine.API.Create for "addons-220192" (driver="docker")
	I0927 00:33:55.156728  559927 client.go:168] LocalClient.Create starting
	I0927 00:33:55.156866  559927 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem
	I0927 00:33:55.366096  559927 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem
	I0927 00:33:55.869561  559927 cli_runner.go:164] Run: docker network inspect addons-220192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0927 00:33:55.885619  559927 cli_runner.go:211] docker network inspect addons-220192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0927 00:33:55.885722  559927 network_create.go:284] running [docker network inspect addons-220192] to gather additional debugging logs...
	I0927 00:33:55.885746  559927 cli_runner.go:164] Run: docker network inspect addons-220192
	W0927 00:33:55.900334  559927 cli_runner.go:211] docker network inspect addons-220192 returned with exit code 1
	I0927 00:33:55.900373  559927 network_create.go:287] error running [docker network inspect addons-220192]: docker network inspect addons-220192: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-220192 not found
	I0927 00:33:55.900388  559927 network_create.go:289] output of [docker network inspect addons-220192]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-220192 not found
	
	** /stderr **
	I0927 00:33:55.900485  559927 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:33:55.915597  559927 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bf5250}
	I0927 00:33:55.915643  559927 network_create.go:124] attempt to create docker network addons-220192 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0927 00:33:55.915701  559927 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-220192 addons-220192
	I0927 00:33:55.980148  559927 network_create.go:108] docker network addons-220192 192.168.49.0/24 created
	I0927 00:33:55.980183  559927 kic.go:121] calculated static IP "192.168.49.2" for the "addons-220192" container
	I0927 00:33:55.980255  559927 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0927 00:33:55.992949  559927 cli_runner.go:164] Run: docker volume create addons-220192 --label name.minikube.sigs.k8s.io=addons-220192 --label created_by.minikube.sigs.k8s.io=true
	I0927 00:33:56.009754  559927 oci.go:103] Successfully created a docker volume addons-220192
	I0927 00:33:56.009852  559927 cli_runner.go:164] Run: docker run --rm --name addons-220192-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-220192 --entrypoint /usr/bin/test -v addons-220192:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0927 00:33:57.993052  559927 cli_runner.go:217] Completed: docker run --rm --name addons-220192-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-220192 --entrypoint /usr/bin/test -v addons-220192:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (1.983158106s)
	I0927 00:33:57.993080  559927 oci.go:107] Successfully prepared a docker volume addons-220192
	I0927 00:33:57.993109  559927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:33:57.993128  559927 kic.go:194] Starting extracting preloaded images to volume ...
	I0927 00:33:57.993194  559927 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-220192:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0927 00:34:02.014141  559927 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-220192:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (4.020882938s)
	I0927 00:34:02.014176  559927 kic.go:203] duration metric: took 4.021043549s to extract preloaded images to volume ...
	W0927 00:34:02.014327  559927 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0927 00:34:02.014451  559927 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0927 00:34:02.064494  559927 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-220192 --name addons-220192 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-220192 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-220192 --network addons-220192 --ip 192.168.49.2 --volume addons-220192:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0927 00:34:02.388520  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Running}}
	I0927 00:34:02.409325  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:02.431602  559927 cli_runner.go:164] Run: docker exec addons-220192 stat /var/lib/dpkg/alternatives/iptables
	I0927 00:34:02.480602  559927 oci.go:144] the created container "addons-220192" has a running status.
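The lines above show the run-and-time pattern minikube uses for every docker CLI call (network create, volume create, the long docker run, then container inspect). Below is a minimal Go sketch of that pattern only; it assumes a docker binary on PATH, and the helper name is illustrative, not minikube's actual cli_runner code.
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// runCmd mirrors the pattern in the log above: run a CLI command,
	// capture its combined output, and report how long it took.
	// Illustrative helper only, not minikube's cli_runner API.
	func runCmd(name string, args ...string) (string, time.Duration, error) {
		start := time.Now()
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), time.Since(start), err
	}
	
	func main() {
		// Example: the running-status check from the log,
		// "docker container inspect --format={{.State.Running}} addons-220192".
		out, took, err := runCmd("docker", "container", "inspect",
			"--format", "{{.State.Running}}", "addons-220192")
		fmt.Printf("output=%q took=%s err=%v\n", out, took, err)
	}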
	I0927 00:34:02.480633  559927 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa...
	I0927 00:34:03.617795  559927 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0927 00:34:03.637260  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:03.653027  559927 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0927 00:34:03.653052  559927 kic_runner.go:114] Args: [docker exec --privileged addons-220192 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0927 00:34:03.700155  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:03.717668  559927 machine.go:93] provisionDockerMachine start ...
	I0927 00:34:03.717764  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:03.733546  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:03.733814  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:03.733823  559927 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 00:34:03.862293  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-220192
	
	I0927 00:34:03.862317  559927 ubuntu.go:169] provisioning hostname "addons-220192"
	I0927 00:34:03.862386  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:03.879096  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:03.879355  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:03.879374  559927 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-220192 && echo "addons-220192" | sudo tee /etc/hostname
	I0927 00:34:04.019276  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-220192
	
	I0927 00:34:04.019405  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:04.036545  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:04.036798  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:04.036821  559927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-220192' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-220192/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-220192' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 00:34:04.162591  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 00:34:04.162681  559927 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19711-553751/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-553751/.minikube}
	I0927 00:34:04.162739  559927 ubuntu.go:177] setting up certificates
	I0927 00:34:04.162769  559927 provision.go:84] configureAuth start
	I0927 00:34:04.162865  559927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-220192
	I0927 00:34:04.179414  559927 provision.go:143] copyHostCerts
	I0927 00:34:04.179501  559927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/ca.pem (1078 bytes)
	I0927 00:34:04.179628  559927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/cert.pem (1123 bytes)
	I0927 00:34:04.179689  559927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/key.pem (1675 bytes)
	I0927 00:34:04.179747  559927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem org=jenkins.addons-220192 san=[127.0.0.1 192.168.49.2 addons-220192 localhost minikube]
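The provision step above generates a server certificate whose SANs are [127.0.0.1 192.168.49.2 addons-220192 localhost minikube], signed by the local CA. As a rough sketch of how such a SAN-bearing certificate can be produced with Go's standard crypto/x509 package (not minikube's own cert code), assuming a freshly generated CA:
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func check(err error) {
		if err != nil {
			panic(err)
		}
	}
	
	func main() {
		// Self-signed CA, playing the role of the "minikubeCA" cert in the log.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)
	
		// Server certificate carrying the SAN list shown in the log.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-220192"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-220192", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		fmt.Printf("server cert: %d DER bytes, issued by %q\n", len(srvDER), caCert.Subject.CommonName)
	}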
	I0927 00:34:04.940382  559927 provision.go:177] copyRemoteCerts
	I0927 00:34:04.940458  559927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 00:34:04.940508  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:04.963981  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.060102  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 00:34:05.084207  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 00:34:05.107968  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 00:34:05.131460  559927 provision.go:87] duration metric: took 968.661896ms to configureAuth
	I0927 00:34:05.131489  559927 ubuntu.go:193] setting minikube options for container-runtime
	I0927 00:34:05.131682  559927 config.go:182] Loaded profile config "addons-220192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:34:05.131795  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.148107  559927 main.go:141] libmachine: Using SSH client type: native
	I0927 00:34:05.148363  559927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33501 <nil> <nil>}
	I0927 00:34:05.148380  559927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 00:34:05.367545  559927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 00:34:05.367569  559927 machine.go:96] duration metric: took 1.649879839s to provisionDockerMachine
	I0927 00:34:05.367581  559927 client.go:171] duration metric: took 10.210842557s to LocalClient.Create
	I0927 00:34:05.367593  559927 start.go:167] duration metric: took 10.210902338s to libmachine.API.Create "addons-220192"
	I0927 00:34:05.367601  559927 start.go:293] postStartSetup for "addons-220192" (driver="docker")
	I0927 00:34:05.367612  559927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:34:05.367677  559927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:34:05.367727  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.385055  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.479714  559927 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 00:34:05.483003  559927 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 00:34:05.483039  559927 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 00:34:05.483050  559927 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 00:34:05.483057  559927 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 00:34:05.483067  559927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-553751/.minikube/addons for local assets ...
	I0927 00:34:05.483137  559927 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-553751/.minikube/files for local assets ...
	I0927 00:34:05.483165  559927 start.go:296] duration metric: took 115.558426ms for postStartSetup
	I0927 00:34:05.483490  559927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-220192
	I0927 00:34:05.499440  559927 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/config.json ...
	I0927 00:34:05.499737  559927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 00:34:05.499789  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.515159  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.603311  559927 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 00:34:05.607625  559927 start.go:128] duration metric: took 10.453518321s to createHost
	I0927 00:34:05.607654  559927 start.go:83] releasing machines lock for "addons-220192", held for 10.453681394s
	I0927 00:34:05.607730  559927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-220192
	I0927 00:34:05.623821  559927 ssh_runner.go:195] Run: cat /version.json
	I0927 00:34:05.623878  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.623938  559927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 00:34:05.624015  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:05.641153  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.648618  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:05.857953  559927 ssh_runner.go:195] Run: systemctl --version
	I0927 00:34:05.862287  559927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 00:34:06.008454  559927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 00:34:06.013211  559927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:34:06.035213  559927 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0927 00:34:06.035367  559927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:34:06.065128  559927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0927 00:34:06.065196  559927 start.go:495] detecting cgroup driver to use...
	I0927 00:34:06.065243  559927 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 00:34:06.065323  559927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 00:34:06.081824  559927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 00:34:06.093535  559927 docker.go:217] disabling cri-docker service (if available) ...
	I0927 00:34:06.093645  559927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 00:34:06.108200  559927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 00:34:06.123249  559927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 00:34:06.207618  559927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 00:34:06.299470  559927 docker.go:233] disabling docker service ...
	I0927 00:34:06.299551  559927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 00:34:06.320068  559927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 00:34:06.331991  559927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 00:34:06.415970  559927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 00:34:06.517135  559927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 00:34:06.528773  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:34:06.545373  559927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 00:34:06.545478  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.555271  559927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 00:34:06.555361  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.565035  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.574675  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.584230  559927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:34:06.593099  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.602922  559927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 00:34:06.618358  559927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
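The sed invocations above patch /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon_cgroup, and the unprivileged-port sysctl. A small Go sketch of the same line-rewrite idea, shown here only for the pause_image key and intended to be run against a local scratch copy of the file rather than the live node:
	package main
	
	import (
		"os"
		"regexp"
	)
	
	func main() {
		// Same idea as the sed edit above: rewrite one key in a drop-in config.
		// The image value is taken from the log; the path is a local copy.
		const path = "02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}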
	I0927 00:34:06.628225  559927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:34:06.636420  559927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:34:06.644684  559927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:34:06.724669  559927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 00:34:06.839759  559927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 00:34:06.839877  559927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 00:34:06.843772  559927 start.go:563] Will wait 60s for crictl version
	I0927 00:34:06.843909  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:34:06.847728  559927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 00:34:06.886811  559927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0927 00:34:06.886963  559927 ssh_runner.go:195] Run: crio --version
	I0927 00:34:06.923924  559927 ssh_runner.go:195] Run: crio --version
	I0927 00:34:06.961630  559927 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0927 00:34:06.964039  559927 cli_runner.go:164] Run: docker network inspect addons-220192 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 00:34:06.979344  559927 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0927 00:34:06.982885  559927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:34:06.993886  559927 kubeadm.go:883] updating cluster {Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:34:06.994013  559927 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:34:06.994079  559927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:34:07.065666  559927 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:34:07.065693  559927 crio.go:433] Images already preloaded, skipping extraction
	I0927 00:34:07.065759  559927 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 00:34:07.103089  559927 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 00:34:07.103111  559927 cache_images.go:84] Images are preloaded, skipping loading
	I0927 00:34:07.103119  559927 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0927 00:34:07.103212  559927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-220192 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 00:34:07.103294  559927 ssh_runner.go:195] Run: crio config
	I0927 00:34:07.184942  559927 cni.go:84] Creating CNI manager for ""
	I0927 00:34:07.185003  559927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:34:07.185030  559927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:34:07.185073  559927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-220192 NodeName:addons-220192 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:34:07.185246  559927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-220192"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:34:07.185338  559927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:34:07.193935  559927 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 00:34:07.194048  559927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:34:07.202460  559927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0927 00:34:07.219678  559927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:34:07.237053  559927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
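At this point the rendered kubeadm config (the multi-document YAML shown earlier) has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. One way to sanity-check such a file is to decode each YAML document and print its apiVersion and kind; the sketch below assumes the third-party gopkg.in/yaml.v3 package and a locally accessible copy of the file.
	package main
	
	import (
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		// Open a local copy of the generated config; adjust the path as needed.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		// The file holds several YAML documents separated by "---";
		// decode them one by one and report apiVersion/kind.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}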
	I0927 00:34:07.254481  559927 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0927 00:34:07.257688  559927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 00:34:07.268344  559927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:34:07.360228  559927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:34:07.373741  559927 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192 for IP: 192.168.49.2
	I0927 00:34:07.373817  559927 certs.go:194] generating shared ca certs ...
	I0927 00:34:07.373850  559927 certs.go:226] acquiring lock for ca certs: {Name:mkd73b356b28d0818fea73c44481b0cb2597afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:07.374052  559927 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key
	I0927 00:34:07.720680  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt ...
	I0927 00:34:07.720716  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt: {Name:mkbfcd9c6c45e82aff1171fec506aac41dc5280a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:07.720931  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key ...
	I0927 00:34:07.720946  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key: {Name:mk27b9aca1fe71da4c843dcf3c985bda93669b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:07.721037  559927 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key
	I0927 00:34:09.101274  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt ...
	I0927 00:34:09.101305  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt: {Name:mkdc0759b42a37859fc6068ba22254e0927be300 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.101947  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key ...
	I0927 00:34:09.101964  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key: {Name:mke7b97bcbcb62de5f7a0ca1a1958a806a1e0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.102051  559927 certs.go:256] generating profile certs ...
	I0927 00:34:09.102113  559927 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.key
	I0927 00:34:09.102130  559927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt with IP's: []
	I0927 00:34:09.315290  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt ...
	I0927 00:34:09.315324  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: {Name:mkfff86d6c11512911cf0969854882c551536630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.315544  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.key ...
	I0927 00:34:09.315558  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.key: {Name:mk1634c2995d45b5e8b115cffc851a552ceefda4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.315645  559927 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9
	I0927 00:34:09.315665  559927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0927 00:34:09.625710  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9 ...
	I0927 00:34:09.625740  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9: {Name:mk7150966e38d5953f0ffbbca37251c426945939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.625923  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9 ...
	I0927 00:34:09.625936  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9: {Name:mk05d3eba820733b8f36b06f33f5470f331f3307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:09.626021  559927 certs.go:381] copying /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt.bb9babc9 -> /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt
	I0927 00:34:09.626100  559927 certs.go:385] copying /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key.bb9babc9 -> /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key
	I0927 00:34:09.626154  559927 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key
	I0927 00:34:09.626175  559927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt with IP's: []
	I0927 00:34:10.552918  559927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt ...
	I0927 00:34:10.552956  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt: {Name:mkf5cd4cf9e9eaebbd419908d7e57768395a038f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:10.553141  559927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key ...
	I0927 00:34:10.553160  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key: {Name:mk5fec058a0a902adcdcf9089d18b3d6355794eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:10.553344  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem (1679 bytes)
	I0927 00:34:10.553391  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem (1078 bytes)
	I0927 00:34:10.553423  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:34:10.553451  559927 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem (1675 bytes)
	I0927 00:34:10.554112  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:34:10.580588  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 00:34:10.603802  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:34:10.628713  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 00:34:10.653540  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 00:34:10.677124  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 00:34:10.701503  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:34:10.724622  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 00:34:10.748189  559927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:34:10.772084  559927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:34:10.789400  559927 ssh_runner.go:195] Run: openssl version
	I0927 00:34:10.794925  559927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:34:10.804621  559927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:34:10.808078  559927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:34 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:34:10.808143  559927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:34:10.814650  559927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 00:34:10.823722  559927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:34:10.826819  559927 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:34:10.826870  559927 kubeadm.go:392] StartCluster: {Name:addons-220192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-220192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:34:10.826950  559927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 00:34:10.827020  559927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 00:34:10.866663  559927 cri.go:89] found id: ""
	I0927 00:34:10.866760  559927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:34:10.875415  559927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:34:10.883762  559927 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0927 00:34:10.883827  559927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:34:10.893704  559927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:34:10.893724  559927 kubeadm.go:157] found existing configuration files:
	
	I0927 00:34:10.893774  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:34:10.902339  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:34:10.902423  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:34:10.910637  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:34:10.919187  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:34:10.919251  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:34:10.927057  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:34:10.935278  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:34:10.935346  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:34:10.943456  559927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:34:10.951694  559927 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:34:10.951762  559927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 00:34:10.959916  559927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0927 00:34:10.995459  559927 kubeadm.go:310] W0927 00:34:10.994701    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:34:10.996690  559927 kubeadm.go:310] W0927 00:34:10.996201    1188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:34:11.020983  559927 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0927 00:34:11.080895  559927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 00:34:29.763728  559927 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:34:29.763788  559927 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:34:29.763877  559927 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0927 00:34:29.763937  559927 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0927 00:34:29.764020  559927 kubeadm.go:310] OS: Linux
	I0927 00:34:29.764081  559927 kubeadm.go:310] CGROUPS_CPU: enabled
	I0927 00:34:29.764137  559927 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0927 00:34:29.764217  559927 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0927 00:34:29.764274  559927 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0927 00:34:29.764324  559927 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0927 00:34:29.764406  559927 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0927 00:34:29.764467  559927 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0927 00:34:29.764528  559927 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0927 00:34:29.764588  559927 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0927 00:34:29.764661  559927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:34:29.764772  559927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:34:29.764867  559927 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:34:29.764931  559927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:34:29.766962  559927 out.go:235]   - Generating certificates and keys ...
	I0927 00:34:29.767068  559927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:34:29.767153  559927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:34:29.767232  559927 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:34:29.767300  559927 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:34:29.767387  559927 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:34:29.767453  559927 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:34:29.767527  559927 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:34:29.767659  559927 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-220192 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:34:29.767722  559927 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:34:29.767855  559927 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-220192 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0927 00:34:29.767928  559927 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:34:29.768001  559927 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:34:29.768051  559927 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:34:29.768131  559927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:34:29.768206  559927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:34:29.768283  559927 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:34:29.768353  559927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:34:29.768436  559927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:34:29.768511  559927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:34:29.768606  559927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:34:29.768699  559927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:34:29.769783  559927 out.go:235]   - Booting up control plane ...
	I0927 00:34:29.769896  559927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:34:29.769989  559927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:34:29.770065  559927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:34:29.770172  559927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:34:29.770279  559927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:34:29.770329  559927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:34:29.770469  559927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:34:29.770575  559927 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:34:29.770637  559927 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.50140432s
	I0927 00:34:29.770724  559927 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:34:29.770784  559927 kubeadm.go:310] [api-check] The API server is healthy after 6.001791706s
	I0927 00:34:29.770893  559927 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:34:29.771024  559927 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:34:29.771086  559927 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:34:29.771270  559927 kubeadm.go:310] [mark-control-plane] Marking the node addons-220192 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:34:29.771331  559927 kubeadm.go:310] [bootstrap-token] Using token: 9ix9q6.4kz2sbtsprzpkswr
	I0927 00:34:29.773367  559927 out.go:235]   - Configuring RBAC rules ...
	I0927 00:34:29.773551  559927 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:34:29.773700  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:34:29.773871  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:34:29.774024  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:34:29.774161  559927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:34:29.774292  559927 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:34:29.774445  559927 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:34:29.774498  559927 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:34:29.774551  559927 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:34:29.774558  559927 kubeadm.go:310] 
	I0927 00:34:29.774618  559927 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:34:29.774626  559927 kubeadm.go:310] 
	I0927 00:34:29.774701  559927 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:34:29.774709  559927 kubeadm.go:310] 
	I0927 00:34:29.774754  559927 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:34:29.774813  559927 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:34:29.774870  559927 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:34:29.774879  559927 kubeadm.go:310] 
	I0927 00:34:29.774933  559927 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:34:29.774941  559927 kubeadm.go:310] 
	I0927 00:34:29.774988  559927 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:34:29.774996  559927 kubeadm.go:310] 
	I0927 00:34:29.775047  559927 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:34:29.775123  559927 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:34:29.775193  559927 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:34:29.775201  559927 kubeadm.go:310] 
	I0927 00:34:29.775284  559927 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:34:29.775362  559927 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:34:29.775370  559927 kubeadm.go:310] 
	I0927 00:34:29.775452  559927 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9ix9q6.4kz2sbtsprzpkswr \
	I0927 00:34:29.775556  559927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d8dda315011cb74d53922a23f64d2f20e11a31a3286152848c02c6c9df47cdc \
	I0927 00:34:29.775579  559927 kubeadm.go:310] 	--control-plane 
	I0927 00:34:29.775584  559927 kubeadm.go:310] 
	I0927 00:34:29.775668  559927 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:34:29.775676  559927 kubeadm.go:310] 
	I0927 00:34:29.775757  559927 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9ix9q6.4kz2sbtsprzpkswr \
	I0927 00:34:29.775873  559927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3d8dda315011cb74d53922a23f64d2f20e11a31a3286152848c02c6c9df47cdc 
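The join command printed above carries a --discovery-token-ca-cert-hash, which kubeadm documents as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes such a hash from a ca.crt file (the path here is assumed; on this node the CA material lives under /var/lib/minikube/certs):
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		// Read a PEM-encoded CA certificate; adjust the path as needed.
		pemBytes, err := os.ReadFile("ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The discovery hash is the SHA-256 of the DER-encoded
		// Subject Public Key Info of the CA certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}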
	I0927 00:34:29.775887  559927 cni.go:84] Creating CNI manager for ""
	I0927 00:34:29.775895  559927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:34:29.778035  559927 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 00:34:29.779166  559927 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 00:34:29.783667  559927 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 00:34:29.783687  559927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 00:34:29.802342  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
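The apply above installs the kindnet CNI manifest that minikube generated for the docker driver + crio runtime combination. A minimal verification sketch, assuming kindnet runs as a DaemonSet labelled app=kindnet in kube-system (names not taken from this log):

	# Assumption: kindnet DaemonSet/label names; check the CNI rollout after the apply above.
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system rollout status daemonset/kindnet --timeout=120s
	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l app=kindnet -o wide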
	I0927 00:34:30.115884  559927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:34:30.116099  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:30.116240  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-220192 minikube.k8s.io/updated_at=2024_09_27T00_34_30_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=addons-220192 minikube.k8s.io/primary=true
	I0927 00:34:30.127679  559927 ops.go:34] apiserver oom_adj: -16
	I0927 00:34:30.288090  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:30.788920  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:31.288744  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:31.788793  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:32.288933  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:32.788947  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:33.288195  559927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:34:33.380134  559927 kubeadm.go:1113] duration metric: took 3.264113362s to wait for elevateKubeSystemPrivileges
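The elevateKubeSystemPrivileges step timed above amounts to the two operations visible in the preceding lines: binding cluster-admin to kube-system's default ServiceAccount, then polling until the default ServiceAccount exists. A hand-run equivalent of the same commands, for illustration only:

	# Same binding minikube created above, against the same kubeconfig.
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default
	# Poll until the default ServiceAccount has been created, as the repeated "get sa default" lines do.
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get sa default >/dev/null 2>&1; do sleep 0.5; done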
	I0927 00:34:33.380167  559927 kubeadm.go:394] duration metric: took 22.553300472s to StartCluster
	I0927 00:34:33.380185  559927 settings.go:142] acquiring lock: {Name:mk5b1f005001018637d448709269193603885722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:33.380304  559927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 00:34:33.380761  559927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/kubeconfig: {Name:mkc30ade55bf966f83b95c0af3746bfadfd3f379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:34:33.380969  559927 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 00:34:33.381135  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:34:33.381376  559927 config.go:182] Loaded profile config "addons-220192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:34:33.381415  559927 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
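The toEnable map above is the addon set this test profile turns on. For reference, a sketch of the equivalent per-addon CLI calls (profile name taken from the log; only a few of the enabled addons are shown):

	minikube -p addons-220192 addons enable ingress
	minikube -p addons-220192 addons enable registry
	minikube -p addons-220192 addons enable metrics-server
	minikube -p addons-220192 addons enable csi-hostpath-driver
	minikube -p addons-220192 addons enable gcp-auth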
	I0927 00:34:33.381499  559927 addons.go:69] Setting yakd=true in profile "addons-220192"
	I0927 00:34:33.381517  559927 addons.go:234] Setting addon yakd=true in "addons-220192"
	I0927 00:34:33.381542  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.382036  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.382470  559927 addons.go:69] Setting metrics-server=true in profile "addons-220192"
	I0927 00:34:33.382492  559927 addons.go:234] Setting addon metrics-server=true in "addons-220192"
	I0927 00:34:33.382517  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.382550  559927 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-220192"
	I0927 00:34:33.382568  559927 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-220192"
	I0927 00:34:33.382595  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.382967  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.383084  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.383406  559927 out.go:177] * Verifying Kubernetes components...
	I0927 00:34:33.388011  559927 addons.go:69] Setting registry=true in profile "addons-220192"
	I0927 00:34:33.388044  559927 addons.go:234] Setting addon registry=true in "addons-220192"
	I0927 00:34:33.388084  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.388540  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.388723  559927 addons.go:69] Setting cloud-spanner=true in profile "addons-220192"
	I0927 00:34:33.388755  559927 addons.go:234] Setting addon cloud-spanner=true in "addons-220192"
	I0927 00:34:33.388797  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.389200  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.392076  559927 addons.go:69] Setting storage-provisioner=true in profile "addons-220192"
	I0927 00:34:33.392108  559927 addons.go:234] Setting addon storage-provisioner=true in "addons-220192"
	I0927 00:34:33.392149  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.392954  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.395344  559927 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-220192"
	I0927 00:34:33.395417  559927 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-220192"
	I0927 00:34:33.395737  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.396386  559927 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-220192"
	I0927 00:34:33.396450  559927 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-220192"
	I0927 00:34:33.396481  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.396929  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.404208  559927 addons.go:69] Setting volcano=true in profile "addons-220192"
	I0927 00:34:33.404292  559927 addons.go:234] Setting addon volcano=true in "addons-220192"
	I0927 00:34:33.404344  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.404886  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.415902  559927 addons.go:69] Setting default-storageclass=true in profile "addons-220192"
	I0927 00:34:33.415938  559927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-220192"
	I0927 00:34:33.416335  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.419241  559927 addons.go:69] Setting volumesnapshots=true in profile "addons-220192"
	I0927 00:34:33.419284  559927 addons.go:234] Setting addon volumesnapshots=true in "addons-220192"
	I0927 00:34:33.419325  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.419808  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.436466  559927 addons.go:69] Setting gcp-auth=true in profile "addons-220192"
	I0927 00:34:33.436505  559927 mustload.go:65] Loading cluster: addons-220192
	I0927 00:34:33.436716  559927 config.go:182] Loaded profile config "addons-220192": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:34:33.436976  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.439910  559927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 00:34:33.454508  559927 addons.go:69] Setting ingress=true in profile "addons-220192"
	I0927 00:34:33.454557  559927 addons.go:234] Setting addon ingress=true in "addons-220192"
	I0927 00:34:33.454603  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.455134  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.470431  559927 addons.go:69] Setting ingress-dns=true in profile "addons-220192"
	I0927 00:34:33.470469  559927 addons.go:234] Setting addon ingress-dns=true in "addons-220192"
	I0927 00:34:33.470522  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.471029  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.480467  559927 addons.go:69] Setting inspektor-gadget=true in profile "addons-220192"
	I0927 00:34:33.480560  559927 addons.go:234] Setting addon inspektor-gadget=true in "addons-220192"
	I0927 00:34:33.480643  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.481279  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.501566  559927 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:34:33.502172  559927 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:34:33.515339  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:34:33.515409  559927 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:34:33.515513  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.533114  559927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:34:33.511884  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:34:33.512482  559927 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:34:33.533606  559927 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:34:33.534258  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.539191  559927 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-220192"
	I0927 00:34:33.539240  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.539680  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.555238  559927 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:34:33.555260  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:34:33.555320  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.575338  559927 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:34:33.575507  559927 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:34:33.579330  559927 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:34:33.579968  559927 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:34:33.579984  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:34:33.580043  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.589869  559927 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:34:33.589937  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:34:33.590044  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.592413  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:34:33.592687  559927 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:34:33.592703  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:34:33.592762  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.594005  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:34:33.594022  559927 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:34:33.594072  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.594614  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:34:33.597708  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:34:33.599885  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:34:33.601815  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:34:33.603187  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:34:33.604424  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:34:33.606160  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:34:33.608900  559927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:34:33.612242  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:34:33.612266  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:34:33.612345  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	W0927 00:34:33.625523  559927 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0927 00:34:33.636029  559927 addons.go:234] Setting addon default-storageclass=true in "addons-220192"
	I0927 00:34:33.636070  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.636475  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:33.653660  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:34:33.658697  559927 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 00:34:33.662778  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:34:33.663023  559927 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:34:33.663038  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 00:34:33.663104  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.676402  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 00:34:33.705602  559927 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:34:33.705630  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 00:34:33.705724  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.728629  559927 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:34:33.732158  559927 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:34:33.732181  559927 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:34:33.732260  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.761441  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.777582  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.779733  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.781803  559927 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:34:33.785371  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:33.796375  559927 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:34:33.796498  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.799961  559927 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:34:33.799986  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:34:33.800052  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.803725  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.805040  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.827419  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.827850  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.868201  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.878799  559927 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:34:33.878821  559927 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:34:33.878995  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:33.889070  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.894820  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	W0927 00:34:33.897254  559927 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0927 00:34:33.897281  559927 retry.go:31] will retry after 222.514368ms: ssh: handshake failed: EOF
	I0927 00:34:33.899204  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:33.924221  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:34.099923  559927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:34:34.099950  559927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:34:34.143807  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:34:34.143833  559927 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:34:34.150094  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:34:34.152840  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:34:34.152862  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:34:34.152949  559927 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:34:34.152971  559927 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:34:34.228010  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:34:34.228039  559927 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:34:34.241657  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 00:34:34.253784  559927 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:34:34.253808  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:34:34.256601  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:34:34.268169  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:34:34.271096  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:34:34.271119  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:34:34.275626  559927 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:34:34.275648  559927 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:34:34.293383  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:34:34.300829  559927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:34:34.300856  559927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:34:34.322150  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:34:34.322176  559927 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:34:34.344962  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:34:34.344989  559927 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:34:34.369058  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:34:34.404038  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:34:34.425344  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 00:34:34.432017  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:34:34.432041  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:34:34.435286  559927 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:34:34.435320  559927 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:34:34.435999  559927 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:34:34.436016  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:34:34.474152  559927 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:34:34.474181  559927 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:34:34.511874  559927 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:34:34.511910  559927 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:34:34.590980  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:34:34.591007  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:34:34.594814  559927 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:34:34.594884  559927 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:34:34.609412  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:34:34.664262  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:34:34.664331  559927 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:34:34.667546  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:34:34.720328  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:34:34.720354  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:34:34.789427  559927 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:34:34.789454  559927 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:34:34.797435  559927 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.357437454s)
	I0927 00:34:34.797514  559927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 00:34:34.797580  559927 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.416422565s)
	I0927 00:34:34.797731  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
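The pipeline above rewrites the coredns ConfigMap in place: the sed expressions inject a hosts stanza mapping 192.168.49.1 to host.minikube.internal (with fallthrough) and a log directive ahead of the errors plugin. A quick way to inspect the result, as a sketch:

	# Dump the patched Corefile and confirm the host.minikube.internal entry landed.
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'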
	I0927 00:34:34.820770  559927 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:34:34.820801  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:34:34.864725  559927 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:34:34.864753  559927 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:34:34.933391  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:34:34.981553  559927 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:34:34.981582  559927 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:34:35.002650  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:34:35.002677  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:34:35.126608  559927 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:34:35.126635  559927 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:34:35.143210  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:34:35.143238  559927 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:34:35.205388  559927 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:34:35.205414  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:34:35.215693  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:34:35.215723  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:34:35.251131  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:34:35.275630  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:34:35.275666  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:34:35.367653  559927 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:34:35.367680  559927 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:34:35.496151  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:34:37.834979  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.684849473s)
	I0927 00:34:39.467821  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.226126912s)
	I0927 00:34:39.467861  559927 addons.go:475] Verifying addon ingress=true in "addons-220192"
	I0927 00:34:39.468074  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.211440688s)
	I0927 00:34:39.468139  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.199948358s)
	I0927 00:34:39.468192  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.174786518s)
	I0927 00:34:39.468376  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.099295453s)
	I0927 00:34:39.468473  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.064403818s)
	I0927 00:34:39.468511  559927 addons.go:475] Verifying addon registry=true in "addons-220192"
	I0927 00:34:39.468878  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.04350358s)
	I0927 00:34:39.468943  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.859503354s)
	I0927 00:34:39.469053  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.801481487s)
	I0927 00:34:39.469062  559927 addons.go:475] Verifying addon metrics-server=true in "addons-220192"
	I0927 00:34:39.469120  559927 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.671373109s)
	I0927 00:34:39.469132  559927 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0927 00:34:39.469138  559927 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.671602913s)
	I0927 00:34:39.469967  559927 node_ready.go:35] waiting up to 6m0s for node "addons-220192" to be "Ready" ...
	I0927 00:34:39.472151  559927 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-220192 service yakd-dashboard -n yakd-dashboard
	
	I0927 00:34:39.472243  559927 out.go:177] * Verifying ingress addon...
	I0927 00:34:39.472289  559927 out.go:177] * Verifying registry addon...
	I0927 00:34:39.475538  559927 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 00:34:39.476423  559927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:34:39.494665  559927 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:34:39.494694  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:39.496798  559927 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 00:34:39.496825  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0927 00:34:39.511262  559927 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
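The warning above is a write conflict: two StorageClasses raced to become the default, and the second update hit a stale resourceVersion. A hedged manual recovery sketch, assuming minikube's built-in class is named "standard" ("local-path" comes from the log):

	kubectl patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'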
	I0927 00:34:39.579923  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.328744084s)
	I0927 00:34:39.580128  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.646707554s)
	W0927 00:34:39.580156  559927 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:34:39.580183  559927 retry.go:31] will retry after 283.440734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:34:39.831932  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.335725047s)
	I0927 00:34:39.831979  559927 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-220192"
	I0927 00:34:39.836412  559927 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:34:39.840109  559927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:34:39.846548  559927 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:34:39.846621  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:39.864697  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
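The retry above re-runs the snapshot-controller apply with --force after the first attempt failed with "ensure CRDs are installed first": the VolumeSnapshotClass was submitted in the same batch as the CRDs that define it, before the API server had registered them. A manual alternative, sketched under the assumption that the same manifest paths are used, is to apply the CRDs first, wait for them to be Established, then apply the class:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml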
	I0927 00:34:40.005609  559927 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-220192" context rescaled to 1 replicas
	I0927 00:34:40.006033  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:40.013393  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:40.344695  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:40.482976  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:40.484052  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:40.844800  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:40.983568  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:40.985312  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:41.344228  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:41.473653  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
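node_ready.go polls the node object until its Ready condition turns True; at this point the CNI pods are still starting, so the node reports NotReady. Watching the same condition by hand, purely for illustration:

	kubectl get node addons-220192 -w
	kubectl describe node addons-220192 | grep -A8 'Conditions:'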
	I0927 00:34:41.480232  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:41.481108  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:41.844824  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:41.984071  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:41.984993  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:42.344135  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:42.481608  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:42.482992  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:42.819929  559927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.955146397s)
	I0927 00:34:42.845156  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:42.980034  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:42.980570  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:43.344660  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:43.464416  559927 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:34:43.464573  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:43.474433  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:43.481829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:43.483496  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:43.483835  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:43.590588  559927 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:34:43.609201  559927 addons.go:234] Setting addon gcp-auth=true in "addons-220192"
	I0927 00:34:43.609254  559927 host.go:66] Checking if "addons-220192" exists ...
	I0927 00:34:43.609751  559927 cli_runner.go:164] Run: docker container inspect addons-220192 --format={{.State.Status}}
	I0927 00:34:43.626431  559927 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:34:43.626487  559927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-220192
	I0927 00:34:43.644327  559927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33501 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/addons-220192/id_rsa Username:docker}
	I0927 00:34:43.741116  559927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:34:43.743530  559927 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:34:43.746014  559927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:34:43.746031  559927 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:34:43.763769  559927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:34:43.763793  559927 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:34:43.780969  559927 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:34:43.780996  559927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:34:43.799112  559927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:34:43.844675  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:43.980511  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:43.982005  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:44.322692  559927 addons.go:475] Verifying addon gcp-auth=true in "addons-220192"
	I0927 00:34:44.325770  559927 out.go:177] * Verifying gcp-auth addon...
	I0927 00:34:44.329465  559927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:34:44.333766  559927 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:34:44.333790  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:44.344656  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:44.479817  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:44.480105  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:44.832869  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:44.844511  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:44.979614  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:44.980284  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:45.332817  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:45.343965  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:45.479741  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:45.481120  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:45.832899  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:45.844116  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:45.973317  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:45.979458  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:45.980299  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:46.332489  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:46.343738  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:46.479974  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:46.480735  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:46.833062  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:46.843843  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:46.979508  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:46.980073  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:47.333256  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:47.343452  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:47.479659  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:47.480382  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:47.832663  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:47.843746  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:47.973598  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:47.982398  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:47.983191  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:48.333001  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:48.343792  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:48.480415  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:48.480692  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:48.833104  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:48.843760  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:48.979483  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:48.980880  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:49.333641  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:49.344144  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:49.480257  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:49.483517  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:49.833431  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:49.844206  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:49.991059  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:49.992115  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:49.992352  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:50.332707  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:50.344159  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:50.480722  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:50.481738  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:50.833298  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:50.843495  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:50.979455  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:50.981405  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:51.334674  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:51.344002  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:51.479106  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:51.480280  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:51.833792  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:51.844086  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:51.982704  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:51.983622  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:52.333240  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:52.343403  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:52.474546  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:52.479449  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:52.482139  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:52.832804  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:52.843907  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:52.979328  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:52.980447  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:53.333021  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:53.343677  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:53.479431  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:53.480526  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:53.832723  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:53.843485  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:53.979263  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:53.979973  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:54.333522  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:54.348182  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:54.479005  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:54.480787  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:54.832509  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:54.844676  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:54.974064  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:54.979672  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:54.980722  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:55.333594  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:55.343740  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:55.479360  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:55.480245  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:55.832680  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:55.843543  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:55.979952  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:55.980389  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:56.332637  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:56.344144  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:56.479599  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:56.480801  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:56.832314  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:56.843591  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:56.979818  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:56.982964  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:57.333340  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:57.343648  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:57.473718  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:57.479686  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:57.480106  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:57.833276  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:57.843837  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:57.980259  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:57.980971  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:58.332941  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:58.344198  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:58.479441  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:58.480562  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:58.832511  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:58.843959  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:58.979304  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:58.979902  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:59.332471  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:59.343688  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:59.473837  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:34:59.480105  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:59.480820  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:34:59.833342  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:34:59.844089  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:34:59.979965  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:34:59.980877  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:00.334431  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:00.344836  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:00.479625  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:00.481083  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:00.833462  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:00.844379  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:00.979507  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:00.980347  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:01.333369  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:01.344056  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:01.480874  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:01.481106  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:01.833477  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:01.843808  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:01.973440  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:01.981517  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:01.981736  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:02.332928  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:02.344231  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:02.479408  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:02.480259  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:02.832727  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:02.843980  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:02.979737  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:02.980467  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:03.332964  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:03.343740  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:03.479543  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:03.480087  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:03.833215  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:03.844240  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:03.974500  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:03.980031  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:03.981606  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:04.332668  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:04.343749  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:04.479236  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:04.480360  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:04.833389  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:04.844094  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:04.980186  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:04.980297  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:05.332559  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:05.343815  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:05.479519  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:05.480644  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:05.832634  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:05.843675  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:05.979646  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:05.980528  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:06.332905  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:06.344008  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:06.473815  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:06.480097  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:06.480815  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:06.833469  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:06.844027  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:06.979148  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:06.980069  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:07.332568  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:07.343773  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:07.479920  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:07.479969  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:07.833963  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:07.843803  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:07.980212  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:07.980996  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:08.333337  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:08.343626  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:08.479786  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:08.480531  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:08.832973  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:08.844021  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:08.973090  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:08.980044  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:08.980573  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:09.332531  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:09.348321  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:09.479485  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:09.479813  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:09.833068  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:09.844031  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:09.979535  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:09.981261  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:10.333874  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:10.354135  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:10.484607  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:10.485964  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:10.832728  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:10.844943  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:10.973698  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:10.980277  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:10.980859  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:11.333372  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:11.345921  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:11.479342  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:11.480218  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:11.833074  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:11.844071  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:11.979619  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:11.981229  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:12.333379  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:12.344154  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:12.480895  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:12.481142  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:12.833217  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:12.843423  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:12.979301  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:12.980351  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:13.337392  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:13.343917  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:13.473805  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:13.479743  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:13.481489  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:13.832829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:13.844071  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:13.979477  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:13.980477  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:14.332885  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:14.343685  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:14.479765  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:14.480539  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:14.832829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:14.843971  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:14.980105  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:14.980578  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:15.332551  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:15.343348  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:15.479922  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:15.480686  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:15.833208  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:15.843933  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:15.973782  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:15.979898  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:15.980469  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:16.333214  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:16.344108  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:16.479743  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:16.480603  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:16.833361  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:16.843717  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:16.979315  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:16.980756  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:17.333389  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:17.343864  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:17.480054  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:17.480955  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:17.833334  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:17.843911  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:17.979629  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:17.980181  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:18.332516  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:18.343396  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:18.473097  559927 node_ready.go:53] node "addons-220192" has status "Ready":"False"
	I0927 00:35:18.479374  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:18.479963  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:18.832640  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:18.844049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:18.996522  559927 node_ready.go:49] node "addons-220192" has status "Ready":"True"
	I0927 00:35:18.996599  559927 node_ready.go:38] duration metric: took 39.526610666s for node "addons-220192" to be "Ready" ...
	I0927 00:35:18.996626  559927 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:35:19.019040  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:19.023994  559927 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:35:19.024068  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:19.032376  559927 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-wnhpd" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:19.398908  559927 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:35:19.398987  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:19.399566  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:19.483156  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:19.490619  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:19.833611  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:19.852005  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:20.016049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:20.016250  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:20.347509  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:20.351821  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:20.481433  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:20.482332  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:20.542199  559927 pod_ready.go:93] pod "coredns-7c65d6cfc9-wnhpd" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.542229  559927 pod_ready.go:82] duration metric: took 1.509780007s for pod "coredns-7c65d6cfc9-wnhpd" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.542251  559927 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.548166  559927 pod_ready.go:93] pod "etcd-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.548192  559927 pod_ready.go:82] duration metric: took 5.932914ms for pod "etcd-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.548207  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.553717  559927 pod_ready.go:93] pod "kube-apiserver-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.553741  559927 pod_ready.go:82] duration metric: took 5.524718ms for pod "kube-apiserver-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.553754  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.559029  559927 pod_ready.go:93] pod "kube-controller-manager-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.559057  559927 pod_ready.go:82] duration metric: took 5.294414ms for pod "kube-controller-manager-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.559071  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-shqd9" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.573997  559927 pod_ready.go:93] pod "kube-proxy-shqd9" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.574023  559927 pod_ready.go:82] duration metric: took 14.944163ms for pod "kube-proxy-shqd9" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.574036  559927 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.833824  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:20.848660  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:20.974442  559927 pod_ready.go:93] pod "kube-scheduler-addons-220192" in "kube-system" namespace has status "Ready":"True"
	I0927 00:35:20.974470  559927 pod_ready.go:82] duration metric: took 400.425942ms for pod "kube-scheduler-addons-220192" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.974484  559927 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace to be "Ready" ...
	I0927 00:35:20.982452  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:20.984121  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:21.333221  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:21.345136  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:21.482607  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:21.483622  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:21.833129  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:21.845258  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:21.981612  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:21.982849  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:22.333804  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:22.345228  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:22.481208  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:22.482132  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:22.833026  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:22.845328  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:22.980591  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:22.981225  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:22.984148  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:23.332828  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:23.345437  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:23.480956  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:23.481629  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:23.833324  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:23.845811  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:23.980489  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:23.981126  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:24.334215  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:24.345777  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:24.492856  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:24.501358  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:24.833375  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:24.845765  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:24.984320  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:24.985535  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:25.333030  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:25.346129  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:25.483387  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:25.483462  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:25.491536  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:25.833367  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:25.845582  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:25.986028  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:25.987700  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:26.333088  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:26.347436  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:26.482707  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:26.485635  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:26.835052  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:26.936552  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:26.991369  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:26.993292  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:27.333040  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:27.349818  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:27.490040  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:27.500797  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:27.502364  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:27.833179  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:27.844956  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:27.987680  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:27.989267  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:28.334430  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:28.345015  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:28.482024  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:28.482969  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:28.834146  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:28.845784  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:28.981547  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:28.987897  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:29.332824  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:29.345018  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:29.481343  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:29.483392  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:29.833401  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:29.845939  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:29.983969  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:29.986347  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:29.991317  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:30.333446  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:30.344995  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:30.508060  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:30.509114  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:30.833954  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:30.847331  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:30.983296  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:30.984469  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:31.333529  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:31.346615  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:31.483463  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:31.485699  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:31.834409  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:31.847606  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:31.990264  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:31.991499  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:31.995169  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:32.333938  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:32.345440  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:32.493919  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:32.495619  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:32.838133  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:32.848315  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:33.004360  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:33.006597  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:33.334374  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:33.348157  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:33.487589  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:33.488353  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:33.833623  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:33.845948  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:34.000333  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:34.002102  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:34.006988  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:34.352293  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:34.359508  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:34.502221  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:34.503150  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:34.835304  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:34.865176  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:34.985218  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:34.985823  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:35.334075  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:35.345971  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:35.483800  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:35.491250  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:35.833110  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:35.846328  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:35.979803  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:35.982985  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:36.335407  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:36.345098  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:36.481660  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:36.481954  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:36.483328  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:36.832836  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:36.844919  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:36.982758  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:36.984021  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:37.332859  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:37.344703  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:37.479523  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:37.482358  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:37.833392  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:37.845097  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:37.981768  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:37.982364  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:38.333562  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:38.346750  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:38.538171  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:38.539659  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:38.574486  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:38.833410  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:38.845154  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:38.983941  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:38.986331  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:39.333236  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:39.344860  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:39.487423  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:39.488653  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:39.833699  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:39.845135  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:39.982293  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:39.983320  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:40.334049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:40.345576  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:40.487727  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:40.489357  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:40.850545  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:40.869817  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:40.988622  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:40.997340  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:40.999067  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:41.333838  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:41.344941  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:41.481094  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:41.482258  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:41.833163  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:41.844771  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:41.983305  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:41.984333  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:42.334272  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:42.345229  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:42.492644  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:42.493566  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:42.832709  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:42.851142  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:42.983002  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:42.987339  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:43.333193  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:43.345053  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:43.483125  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:43.484113  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:43.488641  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:43.833337  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:43.845279  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:43.980602  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:43.984005  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:44.333444  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:44.345218  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:44.481670  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:44.482647  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:44.835774  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:44.845367  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:44.995835  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:44.998309  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:45.333453  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:45.345157  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:45.480354  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:45.484276  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:45.833765  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:45.845022  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:45.982788  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:45.986074  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:45.988189  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:46.333646  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:46.346350  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:46.491046  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:46.492583  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:46.835571  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:46.846801  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:46.981975  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:46.983265  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:47.333111  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:47.345419  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:47.484650  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:47.489278  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:47.832786  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:47.845960  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:47.991387  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:47.992583  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:48.333677  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:48.347026  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:48.492253  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:48.493184  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:48.499877  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:48.833921  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:48.845808  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:48.979562  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:48.982627  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:49.333741  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:49.344581  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:49.480529  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:49.480919  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:49.833732  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:49.845393  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:49.981677  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:49.982936  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:50.333400  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:50.346044  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:50.480790  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:50.483023  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:50.833421  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:50.849074  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:50.981931  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:50.989634  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:50.995853  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:51.334696  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:51.348991  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:51.491426  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:51.492330  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:51.833618  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:51.844626  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:51.984195  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:51.985302  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:52.334919  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:52.344890  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:52.483430  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:52.484577  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:52.833804  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:52.845966  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:52.980535  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:52.981657  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:53.333493  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:53.345580  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:53.481301  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:53.482899  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:53.483553  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:53.833110  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:53.845938  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:53.996740  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:53.998174  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:54.334265  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:54.345544  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:54.488077  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:54.489088  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:54.833856  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:54.846893  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:54.982313  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:54.984449  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:55.333590  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:55.345439  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:55.481901  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:55.483756  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:55.484959  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:55.833795  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:55.846912  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:55.985194  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:55.986869  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:56.332981  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:56.345961  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:56.484347  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:56.485464  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:56.834149  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:56.849037  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:56.982925  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:56.986831  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:57.333287  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:57.344956  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:57.481955  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:57.492325  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:57.493676  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:57.833426  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:57.844766  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:57.982873  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:57.984241  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:58.334364  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:58.346131  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:58.492147  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:58.492947  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:58.834054  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:58.853019  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:58.991069  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:58.992535  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:59.333737  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:59.346124  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:59.495213  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:59.495807  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:35:59.496471  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:35:59.833938  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:35:59.845169  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:35:59.983223  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:35:59.984276  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:00.333940  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:00.345113  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:00.481959  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:00.482968  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:00.834016  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:00.845460  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:00.984100  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:00.985224  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:01.332734  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:01.344581  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:01.486492  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:01.487076  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:01.833007  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:01.844703  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:01.981792  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:01.982901  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:01.983764  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:02.334761  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:02.345539  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:02.487447  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:02.491829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:02.834434  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:02.845976  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:02.984185  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:02.987921  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:03.334009  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:03.345775  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:03.481785  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:03.481999  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:03.834559  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:03.846410  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:03.982047  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:03.983140  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:03.986356  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:04.334016  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:04.345507  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:04.482381  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:36:04.483345  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:04.833296  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:04.845032  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:04.983083  559927 kapi.go:107] duration metric: took 1m25.506656031s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:36:04.983755  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:05.334049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:05.345555  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:05.480336  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:05.833772  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:05.845009  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:05.982793  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:06.334193  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:06.346939  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:06.482860  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:06.484360  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:06.833274  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:06.844879  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:06.982428  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:07.332952  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:07.347731  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:07.482480  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:07.833289  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:07.844648  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:07.980267  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:08.333858  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:08.345865  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:08.481076  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:08.483138  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:08.835184  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:08.845444  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:08.987050  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:09.334706  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:09.348925  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:09.482708  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:09.834286  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:09.845038  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:09.986190  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:10.333090  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:10.344775  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:10.480737  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:10.833646  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:36:10.846522  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:10.982188  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:10.982779  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:11.333798  559927 kapi.go:107] duration metric: took 1m27.004325034s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 00:36:11.335762  559927 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-220192 cluster.
	I0927 00:36:11.337808  559927 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:36:11.339463  559927 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 00:36:11.344308  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:11.480962  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:11.846998  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:11.989166  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:12.345349  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:12.483345  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:12.845611  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:12.985783  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:12.987818  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:13.345705  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:13.483215  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:13.844991  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:13.984190  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:14.345761  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:14.483266  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:14.848904  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:14.983719  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:15.344480  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:15.486603  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:15.492777  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:15.846650  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:15.979870  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:16.345708  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:16.480932  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:16.845136  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:16.982088  559927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 00:36:17.345624  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:17.482482  559927 kapi.go:107] duration metric: took 1m38.006940645s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 00:36:17.844816  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:17.984704  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:18.345226  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:18.845178  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:19.349482  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:19.846085  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:20.349935  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:20.481081  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:20.845700  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:21.345969  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:21.844863  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:22.345753  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:22.845200  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:22.981147  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:23.346423  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:23.845463  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:24.345795  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:24.845049  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:25.345257  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:25.484602  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:25.846829  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:26.347013  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:26.845138  559927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:36:27.345506  559927 kapi.go:107] duration metric: took 1m47.50539711s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 00:36:27.348020  559927 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I0927 00:36:27.351337  559927 addons.go:510] duration metric: took 1m53.969914524s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I0927 00:36:27.980368  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:29.982001  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:32.481885  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:34.980951  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:36.981764  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:39.480929  559927 pod_ready.go:103] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"False"
	I0927 00:36:39.981626  559927 pod_ready.go:93] pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace has status "Ready":"True"
	I0927 00:36:39.981655  559927 pod_ready.go:82] duration metric: took 1m19.007136304s for pod "metrics-server-84c5f94fbc-zpbj2" in "kube-system" namespace to be "Ready" ...
	I0927 00:36:39.981668  559927 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-dqrvw" in "kube-system" namespace to be "Ready" ...
	I0927 00:36:39.986994  559927 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-dqrvw" in "kube-system" namespace has status "Ready":"True"
	I0927 00:36:39.987021  559927 pod_ready.go:82] duration metric: took 5.342068ms for pod "nvidia-device-plugin-daemonset-dqrvw" in "kube-system" namespace to be "Ready" ...
	I0927 00:36:39.987044  559927 pod_ready.go:39] duration metric: took 1m20.990388006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:36:39.987060  559927 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:36:39.987091  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:36:39.987152  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:36:40.044709  559927 cri.go:89] found id: "04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:40.044730  559927 cri.go:89] found id: ""
	I0927 00:36:40.044737  559927 logs.go:276] 1 containers: [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395]
	I0927 00:36:40.044793  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.049159  559927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 00:36:40.049232  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:36:40.092137  559927 cri.go:89] found id: "6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:40.092160  559927 cri.go:89] found id: ""
	I0927 00:36:40.092168  559927 logs.go:276] 1 containers: [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e]
	I0927 00:36:40.092226  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.095880  559927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 00:36:40.095952  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:36:40.136619  559927 cri.go:89] found id: "1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:40.136643  559927 cri.go:89] found id: ""
	I0927 00:36:40.136651  559927 logs.go:276] 1 containers: [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6]
	I0927 00:36:40.136728  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.140255  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:36:40.140338  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:36:40.191576  559927 cri.go:89] found id: "555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:40.191596  559927 cri.go:89] found id: ""
	I0927 00:36:40.191603  559927 logs.go:276] 1 containers: [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5]
	I0927 00:36:40.191664  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.195147  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:36:40.195228  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:36:40.232473  559927 cri.go:89] found id: "5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:40.232496  559927 cri.go:89] found id: ""
	I0927 00:36:40.232504  559927 logs.go:276] 1 containers: [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315]
	I0927 00:36:40.232560  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.236094  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:36:40.236166  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:36:40.273140  559927 cri.go:89] found id: "2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:40.273163  559927 cri.go:89] found id: ""
	I0927 00:36:40.273170  559927 logs.go:276] 1 containers: [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9]
	I0927 00:36:40.273258  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.276617  559927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 00:36:40.276695  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:36:40.313852  559927 cri.go:89] found id: "d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:40.313876  559927 cri.go:89] found id: ""
	I0927 00:36:40.313885  559927 logs.go:276] 1 containers: [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5]
	I0927 00:36:40.313941  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:40.317368  559927 logs.go:123] Gathering logs for kubelet ...
	I0927 00:36:40.317391  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:36:40.354686  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.883351    1511 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:40.354935  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.883402    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:40.355126  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.916164    1511 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:40.355357  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:40.356718  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:40.357232  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:40.357591  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:40.358101  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:40.415196  559927 logs.go:123] Gathering logs for kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] ...
	I0927 00:36:40.415235  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:40.520289  559927 logs.go:123] Gathering logs for kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] ...
	I0927 00:36:40.520324  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:40.569490  559927 logs.go:123] Gathering logs for kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] ...
	I0927 00:36:40.569523  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:40.620143  559927 logs.go:123] Gathering logs for kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] ...
	I0927 00:36:40.620183  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:40.663881  559927 logs.go:123] Gathering logs for CRI-O ...
	I0927 00:36:40.663911  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 00:36:40.763619  559927 logs.go:123] Gathering logs for dmesg ...
	I0927 00:36:40.763658  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:36:40.779898  559927 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:36:40.779926  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:36:40.969685  559927 logs.go:123] Gathering logs for kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] ...
	I0927 00:36:40.969715  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:41.024968  559927 logs.go:123] Gathering logs for etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] ...
	I0927 00:36:41.025001  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:41.081642  559927 logs.go:123] Gathering logs for coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] ...
	I0927 00:36:41.081676  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:41.120059  559927 logs.go:123] Gathering logs for container status ...
	I0927 00:36:41.120093  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:36:41.178658  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:41.178684  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:36:41.178749  559927 out.go:270] X Problems detected in kubelet:
	W0927 00:36:41.178763  559927 out.go:270]   Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:41.178772  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:41.178787  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:41.178794  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:41.178804  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:41.178810  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:41.178816  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:36:51.180508  559927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:36:51.193914  559927 api_server.go:72] duration metric: took 2m17.812908825s to wait for apiserver process to appear ...
	I0927 00:36:51.193938  559927 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:36:51.193970  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:36:51.194024  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:36:51.258037  559927 cri.go:89] found id: "04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:51.258058  559927 cri.go:89] found id: ""
	I0927 00:36:51.258066  559927 logs.go:276] 1 containers: [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395]
	I0927 00:36:51.258120  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.261573  559927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 00:36:51.261654  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:36:51.300961  559927 cri.go:89] found id: "6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:51.300984  559927 cri.go:89] found id: ""
	I0927 00:36:51.300993  559927 logs.go:276] 1 containers: [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e]
	I0927 00:36:51.301047  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.304390  559927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 00:36:51.304462  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:36:51.344486  559927 cri.go:89] found id: "1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:51.344509  559927 cri.go:89] found id: ""
	I0927 00:36:51.344517  559927 logs.go:276] 1 containers: [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6]
	I0927 00:36:51.344572  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.348065  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:36:51.348139  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:36:51.384964  559927 cri.go:89] found id: "555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:51.384988  559927 cri.go:89] found id: ""
	I0927 00:36:51.384996  559927 logs.go:276] 1 containers: [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5]
	I0927 00:36:51.385080  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.388530  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:36:51.388601  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:36:51.426096  559927 cri.go:89] found id: "5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:51.426119  559927 cri.go:89] found id: ""
	I0927 00:36:51.426127  559927 logs.go:276] 1 containers: [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315]
	I0927 00:36:51.426183  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.429629  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:36:51.429716  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:36:51.466515  559927 cri.go:89] found id: "2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:51.466536  559927 cri.go:89] found id: ""
	I0927 00:36:51.466544  559927 logs.go:276] 1 containers: [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9]
	I0927 00:36:51.466604  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.470090  559927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 00:36:51.470164  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:36:51.509078  559927 cri.go:89] found id: "d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:51.509100  559927 cri.go:89] found id: ""
	I0927 00:36:51.509107  559927 logs.go:276] 1 containers: [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5]
	I0927 00:36:51.509161  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:36:51.512599  559927 logs.go:123] Gathering logs for kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] ...
	I0927 00:36:51.512667  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:36:51.606345  559927 logs.go:123] Gathering logs for kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] ...
	I0927 00:36:51.606381  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:36:51.648842  559927 logs.go:123] Gathering logs for CRI-O ...
	I0927 00:36:51.648870  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 00:36:51.751992  559927 logs.go:123] Gathering logs for container status ...
	I0927 00:36:51.752031  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:36:51.802535  559927 logs.go:123] Gathering logs for kubelet ...
	I0927 00:36:51.802567  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:36:51.843443  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.883351    1511 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:51.843686  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.883402    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:51.843879  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.916164    1511 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:36:51.844104  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:51.845476  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:51.845988  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:51.846347  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:51.846856  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:51.904915  559927 logs.go:123] Gathering logs for dmesg ...
	I0927 00:36:51.904950  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:36:51.921815  559927 logs.go:123] Gathering logs for etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] ...
	I0927 00:36:51.921883  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:36:51.982538  559927 logs.go:123] Gathering logs for coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] ...
	I0927 00:36:51.982627  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:36:52.028370  559927 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:36:52.028401  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:36:52.168300  559927 logs.go:123] Gathering logs for kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] ...
	I0927 00:36:52.168332  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:36:52.232001  559927 logs.go:123] Gathering logs for kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] ...
	I0927 00:36:52.232037  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:36:52.282225  559927 logs.go:123] Gathering logs for kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] ...
	I0927 00:36:52.282254  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:36:52.325692  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:52.325717  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:36:52.325772  559927 out.go:270] X Problems detected in kubelet:
	W0927 00:36:52.325789  559927 out.go:270]   Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:36:52.325804  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:52.325811  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:52.325824  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:36:52.325830  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:36:52.325836  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:36:52.325846  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:02.327724  559927 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0927 00:37:02.335228  559927 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0927 00:37:02.336164  559927 api_server.go:141] control plane version: v1.31.1
	I0927 00:37:02.336197  559927 api_server.go:131] duration metric: took 11.142248149s to wait for apiserver health ...
	I0927 00:37:02.336207  559927 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:37:02.336227  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 00:37:02.336293  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 00:37:02.373662  559927 cri.go:89] found id: "04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:37:02.373688  559927 cri.go:89] found id: ""
	I0927 00:37:02.373696  559927 logs.go:276] 1 containers: [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395]
	I0927 00:37:02.373750  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.377092  559927 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 00:37:02.377160  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 00:37:02.414236  559927 cri.go:89] found id: "6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:37:02.414265  559927 cri.go:89] found id: ""
	I0927 00:37:02.414279  559927 logs.go:276] 1 containers: [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e]
	I0927 00:37:02.414335  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.417663  559927 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 00:37:02.417741  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 00:37:02.468306  559927 cri.go:89] found id: "1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:37:02.468327  559927 cri.go:89] found id: ""
	I0927 00:37:02.468335  559927 logs.go:276] 1 containers: [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6]
	I0927 00:37:02.468389  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.471964  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 00:37:02.472034  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 00:37:02.512245  559927 cri.go:89] found id: "555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:37:02.512267  559927 cri.go:89] found id: ""
	I0927 00:37:02.512275  559927 logs.go:276] 1 containers: [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5]
	I0927 00:37:02.512330  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.515876  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 00:37:02.515968  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 00:37:02.552023  559927 cri.go:89] found id: "5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:37:02.552047  559927 cri.go:89] found id: ""
	I0927 00:37:02.552055  559927 logs.go:276] 1 containers: [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315]
	I0927 00:37:02.552110  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.555592  559927 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 00:37:02.555670  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 00:37:02.601327  559927 cri.go:89] found id: "2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:37:02.601351  559927 cri.go:89] found id: ""
	I0927 00:37:02.601359  559927 logs.go:276] 1 containers: [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9]
	I0927 00:37:02.601447  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.604953  559927 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 00:37:02.605044  559927 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 00:37:02.642635  559927 cri.go:89] found id: "d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:37:02.642660  559927 cri.go:89] found id: ""
	I0927 00:37:02.642668  559927 logs.go:276] 1 containers: [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5]
	I0927 00:37:02.642789  559927 ssh_runner.go:195] Run: which crictl
	I0927 00:37:02.646380  559927 logs.go:123] Gathering logs for kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] ...
	I0927 00:37:02.646406  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9"
	I0927 00:37:02.718917  559927 logs.go:123] Gathering logs for kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] ...
	I0927 00:37:02.718956  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5"
	I0927 00:37:02.761541  559927 logs.go:123] Gathering logs for container status ...
	I0927 00:37:02.761572  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 00:37:02.809548  559927 logs.go:123] Gathering logs for kubelet ...
	I0927 00:37:02.809580  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 00:37:02.853630  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.883351    1511 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:37:02.853910  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.883402    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:37:02.854104  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: W0927 00:34:35.916164    1511 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-220192" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object
	W0927 00:37:02.854331  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:37:02.855706  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:02.856214  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:02.856573  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:02.857089  559927 logs.go:138] Found kubelet problem: Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:37:02.916418  559927 logs.go:123] Gathering logs for dmesg ...
	I0927 00:37:02.916455  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 00:37:02.932480  559927 logs.go:123] Gathering logs for kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] ...
	I0927 00:37:02.932508  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395"
	I0927 00:37:03.002890  559927 logs.go:123] Gathering logs for kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] ...
	I0927 00:37:03.002926  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5"
	I0927 00:37:03.049813  559927 logs.go:123] Gathering logs for kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] ...
	I0927 00:37:03.049846  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315"
	I0927 00:37:03.093274  559927 logs.go:123] Gathering logs for describe nodes ...
	I0927 00:37:03.093302  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 00:37:03.235228  559927 logs.go:123] Gathering logs for etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] ...
	I0927 00:37:03.235262  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e"
	I0927 00:37:03.286098  559927 logs.go:123] Gathering logs for coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] ...
	I0927 00:37:03.286134  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6"
	I0927 00:37:03.330375  559927 logs.go:123] Gathering logs for CRI-O ...
	I0927 00:37:03.330463  559927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 00:37:03.436949  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:37:03.436986  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 00:37:03.437055  559927 out.go:270] X Problems detected in kubelet:
	W0927 00:37:03.437072  559927 out.go:270]   Sep 27 00:34:35 addons-220192 kubelet[1511]: E0927 00:34:35.916217    1511 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-220192\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-220192' and this object" logger="UnhandledError"
	W0927 00:37:03.437086  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.200959    1511 projected.go:194] Error preparing data for projected volume kube-api-access-8sq56 for pod kube-system/kindnet-4rr4t: [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:03.437094  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.201058    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56 podName:afd40f83-7a79-4edc-bbfc-ff6936a3158e nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.70103236 +0000 UTC m=+8.728897654 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8sq56" (UniqueName: "kubernetes.io/projected/afd40f83-7a79-4edc-bbfc-ff6936a3158e-kube-api-access-8sq56") pod "kindnet-4rr4t" (UID: "afd40f83-7a79-4edc-bbfc-ff6936a3158e") : [failed to fetch token: serviceaccounts "kindnet" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:03.437105  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420880    1511 projected.go:194] Error preparing data for projected volume kube-api-access-pfjql for pod kube-system/kube-proxy-shqd9: [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	W0927 00:37:03.437111  559927 out.go:270]   Sep 27 00:34:37 addons-220192 kubelet[1511]: E0927 00:34:37.420948    1511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql podName:476cb0de-772b-4e25-ac8c-7244a6d392e7 nodeName:}" failed. No retries permitted until 2024-09-27 00:34:37.920927906 +0000 UTC m=+8.948793201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfjql" (UniqueName: "kubernetes.io/projected/476cb0de-772b-4e25-ac8c-7244a6d392e7-kube-api-access-pfjql") pod "kube-proxy-shqd9" (UID: "476cb0de-772b-4e25-ac8c-7244a6d392e7") : [failed to fetch token: serviceaccounts "kube-proxy" is forbidden: User "system:node:addons-220192" cannot create resource "serviceaccounts/token" in API group "" in the namespace "kube-system": no relationship found between node 'addons-220192' and this object, failed to sync configmap cache: timed out waiting for the condition]
	I0927 00:37:03.437117  559927 out.go:358] Setting ErrFile to fd 2...
	I0927 00:37:03.437124  559927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:37:13.449086  559927 system_pods.go:59] 18 kube-system pods found
	I0927 00:37:13.449126  559927 system_pods.go:61] "coredns-7c65d6cfc9-wnhpd" [4f3b2231-030c-4af9-beae-7c98c13d01cd] Running
	I0927 00:37:13.449134  559927 system_pods.go:61] "csi-hostpath-attacher-0" [c49fd5b5-341f-441f-981c-70e3f7bccbff] Running
	I0927 00:37:13.449139  559927 system_pods.go:61] "csi-hostpath-resizer-0" [21888ecf-1320-496d-97d5-a0c1e85ce981] Running
	I0927 00:37:13.449143  559927 system_pods.go:61] "csi-hostpathplugin-pst4l" [ae3ecba5-af16-41fb-a4c3-bf2c43689e50] Running
	I0927 00:37:13.449148  559927 system_pods.go:61] "etcd-addons-220192" [94827fa0-c442-4e24-a83e-22de3bff65e3] Running
	I0927 00:37:13.449152  559927 system_pods.go:61] "kindnet-4rr4t" [afd40f83-7a79-4edc-bbfc-ff6936a3158e] Running
	I0927 00:37:13.449157  559927 system_pods.go:61] "kube-apiserver-addons-220192" [0bec6c78-990c-4ffb-be43-dfb155b147f7] Running
	I0927 00:37:13.449161  559927 system_pods.go:61] "kube-controller-manager-addons-220192" [1353546b-84d9-4cd3-938e-6734b6b3413b] Running
	I0927 00:37:13.449172  559927 system_pods.go:61] "kube-ingress-dns-minikube" [586c242e-8199-4142-985e-e89f7d01e3cc] Running
	I0927 00:37:13.449178  559927 system_pods.go:61] "kube-proxy-shqd9" [476cb0de-772b-4e25-ac8c-7244a6d392e7] Running
	I0927 00:37:13.449186  559927 system_pods.go:61] "kube-scheduler-addons-220192" [c391b3f7-ca7f-48e9-9cec-7188a266035f] Running
	I0927 00:37:13.449190  559927 system_pods.go:61] "metrics-server-84c5f94fbc-zpbj2" [1a96d0d6-2c40-4cd4-ba04-605e67d179f7] Running
	I0927 00:37:13.449195  559927 system_pods.go:61] "nvidia-device-plugin-daemonset-dqrvw" [e6729774-57a9-49c2-a405-b1a541551dd4] Running
	I0927 00:37:13.449199  559927 system_pods.go:61] "registry-66c9cd494c-7997r" [06852bd1-3230-4615-b6a1-8834e426e02d] Running
	I0927 00:37:13.449203  559927 system_pods.go:61] "registry-proxy-ld2gg" [44a3013c-bbfc-4d08-9ed4-a5160422cdf0] Running
	I0927 00:37:13.449210  559927 system_pods.go:61] "snapshot-controller-56fcc65765-b4j5p" [de8a8d5b-ab34-41cb-ac84-b1c9dd58a1ff] Running
	I0927 00:37:13.449215  559927 system_pods.go:61] "snapshot-controller-56fcc65765-w6xf7" [e8e9ea4c-ac11-4dc7-85aa-75c8b2eb463e] Running
	I0927 00:37:13.449221  559927 system_pods.go:61] "storage-provisioner" [20b521d2-cf72-4c64-997c-c30b932659a1] Running
	I0927 00:37:13.449227  559927 system_pods.go:74] duration metric: took 11.113013969s to wait for pod list to return data ...
	I0927 00:37:13.449235  559927 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:37:13.451765  559927 default_sa.go:45] found service account: "default"
	I0927 00:37:13.451791  559927 default_sa.go:55] duration metric: took 2.546967ms for default service account to be created ...
	I0927 00:37:13.451801  559927 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:37:13.461994  559927 system_pods.go:86] 18 kube-system pods found
	I0927 00:37:13.462032  559927 system_pods.go:89] "coredns-7c65d6cfc9-wnhpd" [4f3b2231-030c-4af9-beae-7c98c13d01cd] Running
	I0927 00:37:13.462039  559927 system_pods.go:89] "csi-hostpath-attacher-0" [c49fd5b5-341f-441f-981c-70e3f7bccbff] Running
	I0927 00:37:13.462045  559927 system_pods.go:89] "csi-hostpath-resizer-0" [21888ecf-1320-496d-97d5-a0c1e85ce981] Running
	I0927 00:37:13.462050  559927 system_pods.go:89] "csi-hostpathplugin-pst4l" [ae3ecba5-af16-41fb-a4c3-bf2c43689e50] Running
	I0927 00:37:13.462054  559927 system_pods.go:89] "etcd-addons-220192" [94827fa0-c442-4e24-a83e-22de3bff65e3] Running
	I0927 00:37:13.462059  559927 system_pods.go:89] "kindnet-4rr4t" [afd40f83-7a79-4edc-bbfc-ff6936a3158e] Running
	I0927 00:37:13.462063  559927 system_pods.go:89] "kube-apiserver-addons-220192" [0bec6c78-990c-4ffb-be43-dfb155b147f7] Running
	I0927 00:37:13.462091  559927 system_pods.go:89] "kube-controller-manager-addons-220192" [1353546b-84d9-4cd3-938e-6734b6b3413b] Running
	I0927 00:37:13.462098  559927 system_pods.go:89] "kube-ingress-dns-minikube" [586c242e-8199-4142-985e-e89f7d01e3cc] Running
	I0927 00:37:13.462112  559927 system_pods.go:89] "kube-proxy-shqd9" [476cb0de-772b-4e25-ac8c-7244a6d392e7] Running
	I0927 00:37:13.462117  559927 system_pods.go:89] "kube-scheduler-addons-220192" [c391b3f7-ca7f-48e9-9cec-7188a266035f] Running
	I0927 00:37:13.462121  559927 system_pods.go:89] "metrics-server-84c5f94fbc-zpbj2" [1a96d0d6-2c40-4cd4-ba04-605e67d179f7] Running
	I0927 00:37:13.462131  559927 system_pods.go:89] "nvidia-device-plugin-daemonset-dqrvw" [e6729774-57a9-49c2-a405-b1a541551dd4] Running
	I0927 00:37:13.462136  559927 system_pods.go:89] "registry-66c9cd494c-7997r" [06852bd1-3230-4615-b6a1-8834e426e02d] Running
	I0927 00:37:13.462142  559927 system_pods.go:89] "registry-proxy-ld2gg" [44a3013c-bbfc-4d08-9ed4-a5160422cdf0] Running
	I0927 00:37:13.462149  559927 system_pods.go:89] "snapshot-controller-56fcc65765-b4j5p" [de8a8d5b-ab34-41cb-ac84-b1c9dd58a1ff] Running
	I0927 00:37:13.462179  559927 system_pods.go:89] "snapshot-controller-56fcc65765-w6xf7" [e8e9ea4c-ac11-4dc7-85aa-75c8b2eb463e] Running
	I0927 00:37:13.462189  559927 system_pods.go:89] "storage-provisioner" [20b521d2-cf72-4c64-997c-c30b932659a1] Running
	I0927 00:37:13.462197  559927 system_pods.go:126] duration metric: took 10.389744ms to wait for k8s-apps to be running ...
	I0927 00:37:13.462204  559927 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:37:13.462274  559927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:37:13.475870  559927 system_svc.go:56] duration metric: took 13.657024ms WaitForService to wait for kubelet
	I0927 00:37:13.475900  559927 kubeadm.go:582] duration metric: took 2m40.094897458s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:37:13.475921  559927 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:37:13.479550  559927 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0927 00:37:13.479579  559927 node_conditions.go:123] node cpu capacity is 2
	I0927 00:37:13.479592  559927 node_conditions.go:105] duration metric: took 3.664619ms to run NodePressure ...
	I0927 00:37:13.479604  559927 start.go:241] waiting for startup goroutines ...
	I0927 00:37:13.479611  559927 start.go:246] waiting for cluster config update ...
	I0927 00:37:13.479628  559927 start.go:255] writing updated cluster config ...
	I0927 00:37:13.479920  559927 ssh_runner.go:195] Run: rm -f paused
	I0927 00:37:13.906550  559927 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:37:13.908395  559927 out.go:177] * Done! kubectl is now configured to use "addons-220192" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 27 00:50:34 addons-220192 crio[964]: time="2024-09-27 00:50:34.073492628Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9bcebbd0-ebdc-45a4-8258-65925eae7ec3 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:50:46 addons-220192 crio[964]: time="2024-09-27 00:50:46.073396244Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56b62b48-9028-4393-8e8a-1d0138174b23 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:50:46 addons-220192 crio[964]: time="2024-09-27 00:50:46.073627738Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=56b62b48-9028-4393-8e8a-1d0138174b23 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:50:59 addons-220192 crio[964]: time="2024-09-27 00:50:59.074441101Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1adbfa5f-3263-4e28-afdc-4f0014f220d2 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:50:59 addons-220192 crio[964]: time="2024-09-27 00:50:59.074672857Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1adbfa5f-3263-4e28-afdc-4f0014f220d2 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:51:11 addons-220192 crio[964]: time="2024-09-27 00:51:11.074121467Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f6e5329f-6268-4f14-bea5-5e91dc6a78e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:51:11 addons-220192 crio[964]: time="2024-09-27 00:51:11.074358933Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f6e5329f-6268-4f14-bea5-5e91dc6a78e4 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:51:25 addons-220192 crio[964]: time="2024-09-27 00:51:25.073746638Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4e261052-6f73-461e-9d33-9720f7ba7bf8 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:51:25 addons-220192 crio[964]: time="2024-09-27 00:51:25.073994910Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4e261052-6f73-461e-9d33-9720f7ba7bf8 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:51:38 addons-220192 crio[964]: time="2024-09-27 00:51:38.074028060Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=474a6be9-5328-42d0-9ee7-c96f51dc9ed0 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:51:38 addons-220192 crio[964]: time="2024-09-27 00:51:38.074262023Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=474a6be9-5328-42d0-9ee7-c96f51dc9ed0 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:51:49 addons-220192 crio[964]: time="2024-09-27 00:51:49.073817724Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d47751bc-2f6a-453a-a696-39b945a1e6e9 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:51:49 addons-220192 crio[964]: time="2024-09-27 00:51:49.074045902Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d47751bc-2f6a-453a-a696-39b945a1e6e9 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:52:02 addons-220192 crio[964]: time="2024-09-27 00:52:02.073456814Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=10d12bfb-0cfc-4dd8-ac05-c66f483c9887 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:52:02 addons-220192 crio[964]: time="2024-09-27 00:52:02.073692615Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=10d12bfb-0cfc-4dd8-ac05-c66f483c9887 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:52:16 addons-220192 crio[964]: time="2024-09-27 00:52:16.074111045Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6a972981-2852-479f-a66f-a1b13f060228 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:52:16 addons-220192 crio[964]: time="2024-09-27 00:52:16.074354648Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6a972981-2852-479f-a66f-a1b13f060228 name=/runtime.v1.ImageService/ImageStatus
	Sep 27 00:52:21 addons-220192 crio[964]: time="2024-09-27 00:52:21.235717678Z" level=info msg="Stopping container: 880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce (timeout: 30s)" id=c37e96ed-8896-43c7-a3b8-2a7cfb067b2f name=/runtime.v1.RuntimeService/StopContainer
	Sep 27 00:52:22 addons-220192 crio[964]: time="2024-09-27 00:52:22.392774468Z" level=info msg="Stopped container 880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce: kube-system/metrics-server-84c5f94fbc-zpbj2/metrics-server" id=c37e96ed-8896-43c7-a3b8-2a7cfb067b2f name=/runtime.v1.RuntimeService/StopContainer
	Sep 27 00:52:22 addons-220192 crio[964]: time="2024-09-27 00:52:22.393585950Z" level=info msg="Stopping pod sandbox: 8cbcf8b4931cdbacaf88e47e6abc956e30603c3754f1659a208bb17bfb32ff53" id=da528e40-edfc-4b24-bf50-c967c840a66a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:52:22 addons-220192 crio[964]: time="2024-09-27 00:52:22.393808139Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-zpbj2 Namespace:kube-system ID:8cbcf8b4931cdbacaf88e47e6abc956e30603c3754f1659a208bb17bfb32ff53 UID:1a96d0d6-2c40-4cd4-ba04-605e67d179f7 NetNS:/var/run/netns/c71624eb-0984-4443-a5c4-961faeba0fe8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 27 00:52:22 addons-220192 crio[964]: time="2024-09-27 00:52:22.393955105Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-zpbj2 from CNI network \"kindnet\" (type=ptp)"
	Sep 27 00:52:22 addons-220192 crio[964]: time="2024-09-27 00:52:22.441037952Z" level=info msg="Stopped pod sandbox: 8cbcf8b4931cdbacaf88e47e6abc956e30603c3754f1659a208bb17bfb32ff53" id=da528e40-edfc-4b24-bf50-c967c840a66a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 27 00:52:22 addons-220192 crio[964]: time="2024-09-27 00:52:22.461200078Z" level=info msg="Removing container: 880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce" id=88f05a7d-e024-427b-a0bb-40104bfe4081 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 27 00:52:22 addons-220192 crio[964]: time="2024-09-27 00:52:22.500243658Z" level=info msg="Removed container 880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce: kube-system/metrics-server-84c5f94fbc-zpbj2/metrics-server" id=88f05a7d-e024-427b-a0bb-40104bfe4081 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	777cf3576774f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6              3 minutes ago       Running             hello-world-app           0                   31b2ecd44b5d3       hello-world-app-55bf9c44b4-4f9hl
	deef59e3d12a1       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                    5 minutes ago       Running             nginx                     0                   a2699501fe7b9       nginx
	f79bc824b8278       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69       16 minutes ago      Running             gcp-auth                  0                   c3d022a3b14c6       gcp-auth-89d5ffd79-6m9rp
	bc524d9595882       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98   16 minutes ago      Running             local-path-provisioner    0                   ec2cf1c475ba2       local-path-provisioner-86d989889c-7czzf
	75b98e47380ef       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                   17 minutes ago      Running             storage-provisioner       0                   794276bcaa01b       storage-provisioner
	1a8d7c13a8719       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                   17 minutes ago      Running             coredns                   0                   ef54c3fa3cd28       coredns-7c65d6cfc9-wnhpd
	5e3fe54c99e93       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                   17 minutes ago      Running             kube-proxy                0                   16758e5c05deb       kube-proxy-shqd9
	d7a7261efecf3       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                   17 minutes ago      Running             kindnet-cni               0                   39c54e6136da4       kindnet-4rr4t
	04b9c719c715f       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                   18 minutes ago      Running             kube-apiserver            0                   e263f38ae3b5e       kube-apiserver-addons-220192
	555dc55ff545e       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                   18 minutes ago      Running             kube-scheduler            0                   e432a0cbdf14f       kube-scheduler-addons-220192
	2bfc8d78fdf58       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                   18 minutes ago      Running             kube-controller-manager   0                   75ef397915466       kube-controller-manager-addons-220192
	6b36b1e46732b       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                   18 minutes ago      Running             etcd                      0                   8a08dc7f6d87c       etcd-addons-220192
	
	
	==> coredns [1a8d7c13a871933275d3e84e87c063e55c9ed4adff23be36d5ea4bfa8accbcd6] <==
	[INFO] 10.244.0.17:32921 - 15145 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009599s
	[INFO] 10.244.0.17:32921 - 19537 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002489894s
	[INFO] 10.244.0.17:32921 - 61082 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002491617s
	[INFO] 10.244.0.17:32921 - 31100 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000128301s
	[INFO] 10.244.0.17:32921 - 35939 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000126651s
	[INFO] 10.244.0.17:41730 - 50927 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109577s
	[INFO] 10.244.0.17:41730 - 51164 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000183225s
	[INFO] 10.244.0.17:33425 - 39515 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000088917s
	[INFO] 10.244.0.17:33425 - 39334 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000158479s
	[INFO] 10.244.0.17:42680 - 3435 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000165895s
	[INFO] 10.244.0.17:42680 - 3246 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000204483s
	[INFO] 10.244.0.17:41066 - 45139 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001539254s
	[INFO] 10.244.0.17:41066 - 44967 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001594653s
	[INFO] 10.244.0.17:35895 - 35537 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000064679s
	[INFO] 10.244.0.17:35895 - 35134 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000060282s
	[INFO] 10.244.0.20:38814 - 12571 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000166667s
	[INFO] 10.244.0.20:57837 - 31175 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000084199s
	[INFO] 10.244.0.20:59015 - 52667 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144571s
	[INFO] 10.244.0.20:43948 - 22611 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081081s
	[INFO] 10.244.0.20:39471 - 5951 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114837s
	[INFO] 10.244.0.20:53453 - 53244 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079014s
	[INFO] 10.244.0.20:50375 - 42686 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002646412s
	[INFO] 10.244.0.20:38002 - 62070 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002044169s
	[INFO] 10.244.0.20:54992 - 48913 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001395109s
	[INFO] 10.244.0.20:42555 - 4765 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002338735s
	
	
	==> describe nodes <==
	Name:               addons-220192
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-220192
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=addons-220192
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_34_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-220192
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:34:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-220192
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:52:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:50:07 +0000   Fri, 27 Sep 2024 00:34:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:50:07 +0000   Fri, 27 Sep 2024 00:34:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:50:07 +0000   Fri, 27 Sep 2024 00:34:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:50:07 +0000   Fri, 27 Sep 2024 00:35:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-220192
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6db0b236675141869357d8bd6acda62f
	  System UUID:                96d22be3-917a-4ba2-9d29-91009fed055d
	  Boot ID:                    7df4580f-f941-474d-8050-3bbd7f78d321
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-world-app-55bf9c44b4-4f9hl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  gcp-auth                    gcp-auth-89d5ffd79-6m9rp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-wnhpd                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-addons-220192                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-4rr4t                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-220192               250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-220192      200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-shqd9                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-220192               100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  local-path-storage          local-path-provisioner-86d989889c-7czzf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 17m   kube-proxy       
	  Normal   Starting                 17m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m   kubelet          Node addons-220192 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m   kubelet          Node addons-220192 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m   kubelet          Node addons-220192 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m   node-controller  Node addons-220192 event: Registered Node addons-220192 in Controller
	  Normal   NodeReady                17m   kubelet          Node addons-220192 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep26 22:08] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.694148] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Sep27 00:06] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [6b36b1e46732bafa997c4e66766a4bb0cd5ea7487006b7a6ba9e5860f1743a6e] <==
	{"level":"info","ts":"2024-09-27T00:34:36.705678Z","caller":"traceutil/trace.go:171","msg":"trace[75754528] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"123.945357ms","start":"2024-09-27T00:34:36.581722Z","end":"2024-09-27T00:34:36.705667Z","steps":["trace[75754528] 'process raft request'  (duration: 83.586816ms)","trace[75754528] 'compare'  (duration: 37.390308ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:34:36.707643Z","caller":"traceutil/trace.go:171","msg":"trace[1978378721] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"119.988454ms","start":"2024-09-27T00:34:36.587640Z","end":"2024-09-27T00:34:36.707629Z","steps":["trace[1978378721] 'process raft request'  (duration: 115.241317ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.707788Z","caller":"traceutil/trace.go:171","msg":"trace[245549885] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"125.39105ms","start":"2024-09-27T00:34:36.582391Z","end":"2024-09-27T00:34:36.707782Z","steps":["trace[245549885] 'process raft request'  (duration: 120.456628ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.708110Z","caller":"traceutil/trace.go:171","msg":"trace[386138567] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"101.159781ms","start":"2024-09-27T00:34:36.606943Z","end":"2024-09-27T00:34:36.708103Z","steps":["trace[386138567] 'process raft request'  (duration: 95.968996ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.708135Z","caller":"traceutil/trace.go:171","msg":"trace[1196894315] linearizableReadLoop","detail":"{readStateIndex:349; appliedIndex:344; }","duration":"118.831173ms","start":"2024-09-27T00:34:36.589299Z","end":"2024-09-27T00:34:36.708130Z","steps":["trace[1196894315] 'read index received'  (duration: 75.87577ms)","trace[1196894315] 'applied index is now lower than readState.Index'  (duration: 42.954746ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T00:34:36.708195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.336367ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-27T00:34:36.761229Z","caller":"traceutil/trace.go:171","msg":"trace[1688764860] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:342; }","duration":"210.35389ms","start":"2024-09-27T00:34:36.550840Z","end":"2024-09-27T00:34:36.761194Z","steps":["trace[1688764860] 'agreement among raft nodes before linearized reading'  (duration: 157.305008ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.11481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:34:36.767505Z","caller":"traceutil/trace.go:171","msg":"trace[1525429030] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:342; }","duration":"160.358433ms","start":"2024-09-27T00:34:36.607124Z","end":"2024-09-27T00:34:36.767483Z","steps":["trace[1525429030] 'agreement among raft nodes before linearized reading'  (duration: 101.104668ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.890179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-09-27T00:34:36.767882Z","caller":"traceutil/trace.go:171","msg":"trace[1578121818] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:342; }","duration":"212.443063ms","start":"2024-09-27T00:34:36.555427Z","end":"2024-09-27T00:34:36.767870Z","steps":["trace[1578121818] 'agreement among raft nodes before linearized reading'  (duration: 152.878143ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.946598ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:34:36.768392Z","caller":"traceutil/trace.go:171","msg":"trace[605102501] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:342; }","duration":"186.060789ms","start":"2024-09-27T00:34:36.582319Z","end":"2024-09-27T00:34:36.768380Z","steps":["trace[605102501] 'agreement among raft nodes before linearized reading'  (duration: 125.936571ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:34:36.708296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.170953ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3684"}
	{"level":"warn","ts":"2024-09-27T00:34:36.718091Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.195588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-27T00:34:36.783455Z","caller":"traceutil/trace.go:171","msg":"trace[448706870] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:342; }","duration":"232.572784ms","start":"2024-09-27T00:34:36.550868Z","end":"2024-09-27T00:34:36.783441Z","steps":["trace[448706870] 'agreement among raft nodes before linearized reading'  (duration: 167.157426ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:36.784208Z","caller":"traceutil/trace.go:171","msg":"trace[557653940] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:342; }","duration":"202.077926ms","start":"2024-09-27T00:34:36.582121Z","end":"2024-09-27T00:34:36.784199Z","steps":["trace[557653940] 'agreement among raft nodes before linearized reading'  (duration: 126.155561ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:34:37.377526Z","caller":"traceutil/trace.go:171","msg":"trace[877986833] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"141.52327ms","start":"2024-09-27T00:34:37.235983Z","end":"2024-09-27T00:34:37.377506Z","steps":["trace[877986833] 'process raft request'  (duration: 50.41116ms)","trace[877986833] 'compare'  (duration: 90.99976ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T00:34:37.378234Z","caller":"traceutil/trace.go:171","msg":"trace[1370352228] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"141.929496ms","start":"2024-09-27T00:34:37.236293Z","end":"2024-09-27T00:34:37.378223Z","steps":["trace[1370352228] 'process raft request'  (duration: 141.866039ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:44:24.160820Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1524}
	{"level":"info","ts":"2024-09-27T00:44:24.194054Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1524,"took":"32.739963ms","hash":154592831,"current-db-size-bytes":6713344,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3227648,"current-db-size-in-use":"3.2 MB"}
	{"level":"info","ts":"2024-09-27T00:44:24.194100Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":154592831,"revision":1524,"compact-revision":-1}
	{"level":"info","ts":"2024-09-27T00:49:24.166910Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1941}
	{"level":"info","ts":"2024-09-27T00:49:24.184634Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1941,"took":"17.18324ms","hash":2906400176,"current-db-size-bytes":6713344,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":4603904,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-27T00:49:24.184679Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2906400176,"revision":1941,"compact-revision":1524}
	
	
	==> gcp-auth [f79bc824b8278bffc4be0ad3ad49df8f62945f0be7f07c2e7eba40dd9ed2637d] <==
	2024/09/27 00:37:14 Ready to write response ...
	2024/09/27 00:37:14 Ready to marshal response ...
	2024/09/27 00:37:14 Ready to write response ...
	2024/09/27 00:45:18 Ready to marshal response ...
	2024/09/27 00:45:18 Ready to write response ...
	2024/09/27 00:45:18 Ready to marshal response ...
	2024/09/27 00:45:18 Ready to write response ...
	2024/09/27 00:45:18 Ready to marshal response ...
	2024/09/27 00:45:18 Ready to write response ...
	2024/09/27 00:45:27 Ready to marshal response ...
	2024/09/27 00:45:27 Ready to write response ...
	2024/09/27 00:45:53 Ready to marshal response ...
	2024/09/27 00:45:53 Ready to write response ...
	2024/09/27 00:46:09 Ready to marshal response ...
	2024/09/27 00:46:09 Ready to write response ...
	2024/09/27 00:46:42 Ready to marshal response ...
	2024/09/27 00:46:42 Ready to write response ...
	2024/09/27 00:49:01 Ready to marshal response ...
	2024/09/27 00:49:01 Ready to write response ...
	2024/09/27 00:49:32 Ready to marshal response ...
	2024/09/27 00:49:32 Ready to write response ...
	2024/09/27 00:49:32 Ready to marshal response ...
	2024/09/27 00:49:32 Ready to write response ...
	2024/09/27 00:49:42 Ready to marshal response ...
	2024/09/27 00:49:42 Ready to write response ...
	
	
	==> kernel <==
	 00:52:22 up  4:34,  0 users,  load average: 0.07, 0.33, 0.93
	Linux addons-220192 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [d7a7261efecf3162ccc2d26ed432451c900af8b4d1487407d7ce2be5094281b5] <==
	I0927 00:50:18.620011       1 main.go:299] handling current node
	I0927 00:50:28.619100       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:50:28.619136       1 main.go:299] handling current node
	I0927 00:50:38.619356       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:50:38.619388       1 main.go:299] handling current node
	I0927 00:50:48.619501       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:50:48.619535       1 main.go:299] handling current node
	I0927 00:50:58.619989       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:50:58.620023       1 main.go:299] handling current node
	I0927 00:51:08.620112       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:51:08.620149       1 main.go:299] handling current node
	I0927 00:51:18.619899       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:51:18.619935       1 main.go:299] handling current node
	I0927 00:51:28.619807       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:51:28.619842       1 main.go:299] handling current node
	I0927 00:51:38.619395       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:51:38.619518       1 main.go:299] handling current node
	I0927 00:51:48.619998       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:51:48.620032       1 main.go:299] handling current node
	I0927 00:51:58.619362       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:51:58.619398       1 main.go:299] handling current node
	I0927 00:52:08.619280       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:52:08.619395       1 main.go:299] handling current node
	I0927 00:52:18.619887       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0927 00:52:18.620013       1 main.go:299] handling current node
	
	
	==> kube-apiserver [04b9c719c715f318e0da018097c22f147000bd0fb64d781731fa9eb3b3c51395] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0927 00:36:39.626030       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.158.28:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.158.28:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.158.28:443: connect: connection refused" logger="UnhandledError"
	I0927 00:36:39.717054       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0927 00:45:18.452440       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.96.241"}
	I0927 00:46:03.777180       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0927 00:46:25.606817       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.606863       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.636258       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.636318       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.715476       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.715518       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.735524       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.735605       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0927 00:46:25.743258       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0927 00:46:25.743298       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0927 00:46:26.719243       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0927 00:46:26.744004       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0927 00:46:26.865842       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0927 00:46:36.881429       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0927 00:46:37.930581       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0927 00:46:42.463824       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0927 00:46:42.770878       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.223.22"}
	I0927 00:49:02.205306       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.7.117"}
	
	
	==> kube-controller-manager [2bfc8d78fdf58256c4a5925537af21cdbf3dbd66127f8a15b8101f92fb8a78c9] <==
	W0927 00:50:21.895092       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:50:21.895133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:50:35.013028       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:50:35.013152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:50:45.270286       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:50:45.270371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:51:05.753794       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:51:05.753838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:51:10.964207       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:51:10.964249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:51:16.271429       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:51:16.271470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:51:30.392355       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:51:30.392400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:51:44.908729       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:51:44.908769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:51:47.249164       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:51:47.249282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:52:01.092730       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:52:01.092777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:52:14.936977       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:52:14.937019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:52:17.041523       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:52:17.041565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:52:21.213973       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="4.324µs"
	
	
	==> kube-proxy [5e3fe54c99e931cc6b0b654e967a2638c30374abdabe2c1174d5f6a3fff11315] <==
	I0927 00:34:38.907788       1 server_linux.go:66] "Using iptables proxy"
	I0927 00:34:39.331001       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0927 00:34:39.331159       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:34:39.614187       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0927 00:34:39.614314       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:34:39.617555       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:34:39.625699       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:34:39.625787       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:34:39.645465       1 config.go:199] "Starting service config controller"
	I0927 00:34:39.650076       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:34:39.645886       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:34:39.650198       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:34:39.648423       1 config.go:328] "Starting node config controller"
	I0927 00:34:39.650407       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:34:39.750364       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 00:34:39.751607       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:34:39.751679       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [555dc55ff545e165f45bde68c31f0843d0f21041ba3fea37def560aea920dcc5] <==
	W0927 00:34:26.563980       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 00:34:26.564047       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 00:34:26.564555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:34:26.564682       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564768       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:34:26.564871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:34:26.564995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:34:26.565087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:26.564528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 00:34:26.565193       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.425835       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:34:27.425963       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 00:34:27.457379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 00:34:27.457505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.578493       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 00:34:27.578645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.626921       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:34:27.627048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:34:27.640709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:34:27.640830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0927 00:34:29.347645       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 00:51:38 addons-220192 kubelet[1511]: E0927 00:51:38.074485    1511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb2a80ac-9ca0-4ac1-8260-ec32cfb893e8"
	Sep 27 00:51:39 addons-220192 kubelet[1511]: E0927 00:51:39.435340    1511 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398299435052176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:39 addons-220192 kubelet[1511]: E0927 00:51:39.435380    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398299435052176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:49 addons-220192 kubelet[1511]: E0927 00:51:49.074420    1511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb2a80ac-9ca0-4ac1-8260-ec32cfb893e8"
	Sep 27 00:51:49 addons-220192 kubelet[1511]: E0927 00:51:49.438189    1511 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398309437936151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:49 addons-220192 kubelet[1511]: E0927 00:51:49.438227    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398309437936151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:59 addons-220192 kubelet[1511]: E0927 00:51:59.441117    1511 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398319440893417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:51:59 addons-220192 kubelet[1511]: E0927 00:51:59.441151    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398319440893417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:52:02 addons-220192 kubelet[1511]: E0927 00:52:02.074131    1511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb2a80ac-9ca0-4ac1-8260-ec32cfb893e8"
	Sep 27 00:52:09 addons-220192 kubelet[1511]: E0927 00:52:09.443459    1511 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398329443232837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:52:09 addons-220192 kubelet[1511]: E0927 00:52:09.443494    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398329443232837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:52:16 addons-220192 kubelet[1511]: E0927 00:52:16.074803    1511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb2a80ac-9ca0-4ac1-8260-ec32cfb893e8"
	Sep 27 00:52:19 addons-220192 kubelet[1511]: E0927 00:52:19.445853    1511 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398339445608906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:52:19 addons-220192 kubelet[1511]: E0927 00:52:19.445888    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727398339445608906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:576748,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 00:52:22 addons-220192 kubelet[1511]: I0927 00:52:22.459658    1511 scope.go:117] "RemoveContainer" containerID="880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce"
	Sep 27 00:52:22 addons-220192 kubelet[1511]: I0927 00:52:22.496255    1511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a96d0d6-2c40-4cd4-ba04-605e67d179f7-tmp-dir\") pod \"1a96d0d6-2c40-4cd4-ba04-605e67d179f7\" (UID: \"1a96d0d6-2c40-4cd4-ba04-605e67d179f7\") "
	Sep 27 00:52:22 addons-220192 kubelet[1511]: I0927 00:52:22.496304    1511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms7bn\" (UniqueName: \"kubernetes.io/projected/1a96d0d6-2c40-4cd4-ba04-605e67d179f7-kube-api-access-ms7bn\") pod \"1a96d0d6-2c40-4cd4-ba04-605e67d179f7\" (UID: \"1a96d0d6-2c40-4cd4-ba04-605e67d179f7\") "
	Sep 27 00:52:22 addons-220192 kubelet[1511]: I0927 00:52:22.497132    1511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a96d0d6-2c40-4cd4-ba04-605e67d179f7-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "1a96d0d6-2c40-4cd4-ba04-605e67d179f7" (UID: "1a96d0d6-2c40-4cd4-ba04-605e67d179f7"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 27 00:52:22 addons-220192 kubelet[1511]: I0927 00:52:22.499370    1511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a96d0d6-2c40-4cd4-ba04-605e67d179f7-kube-api-access-ms7bn" (OuterVolumeSpecName: "kube-api-access-ms7bn") pod "1a96d0d6-2c40-4cd4-ba04-605e67d179f7" (UID: "1a96d0d6-2c40-4cd4-ba04-605e67d179f7"). InnerVolumeSpecName "kube-api-access-ms7bn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:52:22 addons-220192 kubelet[1511]: I0927 00:52:22.500654    1511 scope.go:117] "RemoveContainer" containerID="880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce"
	Sep 27 00:52:22 addons-220192 kubelet[1511]: E0927 00:52:22.501109    1511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce\": container with ID starting with 880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce not found: ID does not exist" containerID="880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce"
	Sep 27 00:52:22 addons-220192 kubelet[1511]: I0927 00:52:22.501142    1511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce"} err="failed to get container status \"880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce\": rpc error: code = NotFound desc = could not find container \"880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce\": container with ID starting with 880e241766c14d56056faa27bcd39b8d8c163f04f5b2ee16f9aff92a8c542fce not found: ID does not exist"
	Sep 27 00:52:22 addons-220192 kubelet[1511]: I0927 00:52:22.597455    1511 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/1a96d0d6-2c40-4cd4-ba04-605e67d179f7-tmp-dir\") on node \"addons-220192\" DevicePath \"\""
	Sep 27 00:52:22 addons-220192 kubelet[1511]: I0927 00:52:22.597497    1511 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ms7bn\" (UniqueName: \"kubernetes.io/projected/1a96d0d6-2c40-4cd4-ba04-605e67d179f7-kube-api-access-ms7bn\") on node \"addons-220192\" DevicePath \"\""
	Sep 27 00:52:23 addons-220192 kubelet[1511]: I0927 00:52:23.074672    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a96d0d6-2c40-4cd4-ba04-605e67d179f7" path="/var/lib/kubelet/pods/1a96d0d6-2c40-4cd4-ba04-605e67d179f7/volumes"
	
	
	==> storage-provisioner [75b98e47380efba40cfb3e8a5003cf4e028dcd407cc6a050e8ed0e60a3c3168e] <==
	I0927 00:35:20.141906       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:35:20.155589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:35:20.158853       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:35:20.168600       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:35:20.168906       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-220192_3340d466-8fff-465f-820a-19104d1219e9!
	I0927 00:35:20.169972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"88317798-314a-4def-996f-d4666fa1d4d1", APIVersion:"v1", ResourceVersion:"910", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-220192_3340d466-8fff-465f-820a-19104d1219e9 became leader
	I0927 00:35:20.269123       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-220192_3340d466-8fff-465f-820a-19104d1219e9!
	

                                                
                                                
-- /stdout --
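The storage-provisioner log at the end of the dump above shows a clean leader-election sequence: the pod acquires the kube-system/k8s.io-minikube-hostpath lease, records a LeaderElection event against that Endpoints object, and then starts the provisioner controller, so that addon looks healthy at this point in the run. If this report needs to be cross-checked against a live cluster, the election record can be read straight from that object; a minimal sketch, assuming the addons-220192 profile is still running:

	kubectl --context addons-220192 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The holder recorded in the object's leader-election annotation should match the addons-220192_3340d466-8fff-465f-820a-19104d1219e9 identity named in the log.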
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-220192 -n addons-220192
helpers_test.go:261: (dbg) Run:  kubectl --context addons-220192 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-220192 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-220192 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-220192/192.168.49.2
	Start Time:       Fri, 27 Sep 2024 00:37:14 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lzqg5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lzqg5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/busybox to addons-220192
	  Normal   Pulling    13m (x4 over 15m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     13m (x4 over 15m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     13m (x4 over 15m)  kubelet            Error: ErrImagePull
	  Warning  Failed     13m (x6 over 15m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    7s (x65 over 15m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (357.80s)
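The describe output in the post-mortem explains why the busybox pod listed as non-running never started: every attempt to pull gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password: unauthorized: authentication failed", so the container sits in ImagePullBackOff. A quick way to tell a registry/credential problem apart from a cluster problem is to retry the pull directly on the node; a minimal sketch, assuming the addons-220192 profile is still up and using the crictl binary already present in the minikube node image:

	out/minikube-linux-arm64 -p addons-220192 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

If the pull reproduces the same authentication error from inside the node, the failure sits with the registry or its credentials rather than with the cluster under test.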

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (383.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-745133 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-745133 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 102 (6m19.616302805s)

                                                
                                                
-- stdout --
	* [old-k8s-version-745133] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-745133" primary control-plane node in "old-k8s-version-745133" cluster
	* Pulling base image v0.0.45-1727108449-19696 ...
	* Restarting existing docker container for "old-k8s-version-745133" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-745133 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:38:30.792535  756367 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:38:30.792651  756367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:38:30.792661  756367 out.go:358] Setting ErrFile to fd 2...
	I0927 01:38:30.792666  756367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:38:30.792921  756367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 01:38:30.793298  756367 out.go:352] Setting JSON to false
	I0927 01:38:30.794204  756367 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19254,"bootTime":1727381857,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0927 01:38:30.794276  756367 start.go:139] virtualization:  
	I0927 01:38:30.797721  756367 out.go:177] * [old-k8s-version-745133] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 01:38:30.801467  756367 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:38:30.801626  756367 notify.go:220] Checking for updates...
	I0927 01:38:30.807240  756367 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:38:30.810187  756367 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 01:38:30.812711  756367 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	I0927 01:38:30.815379  756367 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 01:38:30.818124  756367 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:38:30.821074  756367 config.go:182] Loaded profile config "old-k8s-version-745133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:38:30.823755  756367 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0927 01:38:30.826362  756367 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:38:30.873077  756367 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 01:38:30.873187  756367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 01:38:30.952234  756367 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-27 01:38:30.937438012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 01:38:30.952345  756367 docker.go:318] overlay module found
	I0927 01:38:30.955416  756367 out.go:177] * Using the docker driver based on existing profile
	I0927 01:38:30.958323  756367 start.go:297] selected driver: docker
	I0927 01:38:30.958346  756367 start.go:901] validating driver "docker" against &{Name:old-k8s-version-745133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745133 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStr
ing:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:38:30.958458  756367 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:38:30.959163  756367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 01:38:31.048042  756367 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-27 01:38:31.037698358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 01:38:31.048432  756367 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:38:31.048454  756367 cni.go:84] Creating CNI manager for ""
	I0927 01:38:31.048497  756367 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 01:38:31.048531  756367 start.go:340] cluster config:
	{Name:old-k8s-version-745133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:38:31.051279  756367 out.go:177] * Starting "old-k8s-version-745133" primary control-plane node in "old-k8s-version-745133" cluster
	I0927 01:38:31.053760  756367 cache.go:121] Beginning downloading kic base image for docker with crio
	I0927 01:38:31.056718  756367 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 01:38:31.060437  756367 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:38:31.060502  756367 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0927 01:38:31.060511  756367 cache.go:56] Caching tarball of preloaded images
	I0927 01:38:31.060599  756367 preload.go:172] Found /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0927 01:38:31.060609  756367 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0927 01:38:31.060715  756367 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/config.json ...
	I0927 01:38:31.060928  756367 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 01:38:31.095476  756367 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon, skipping pull
	I0927 01:38:31.095496  756367 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in daemon, skipping load
	I0927 01:38:31.095511  756367 cache.go:194] Successfully downloaded all kic artifacts
	I0927 01:38:31.095534  756367 start.go:360] acquireMachinesLock for old-k8s-version-745133: {Name:mk1dc62b74a6b6f0f0f4c7d69d36594586373035 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:38:31.095589  756367 start.go:364] duration metric: took 34.026µs to acquireMachinesLock for "old-k8s-version-745133"
	I0927 01:38:31.095611  756367 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:38:31.095623  756367 fix.go:54] fixHost starting: 
	I0927 01:38:31.095884  756367 cli_runner.go:164] Run: docker container inspect old-k8s-version-745133 --format={{.State.Status}}
	I0927 01:38:31.112123  756367 fix.go:112] recreateIfNeeded on old-k8s-version-745133: state=Stopped err=<nil>
	W0927 01:38:31.112154  756367 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:38:31.114995  756367 out.go:177] * Restarting existing docker container for "old-k8s-version-745133" ...
	I0927 01:38:31.117633  756367 cli_runner.go:164] Run: docker start old-k8s-version-745133
	I0927 01:38:31.477261  756367 cli_runner.go:164] Run: docker container inspect old-k8s-version-745133 --format={{.State.Status}}
	I0927 01:38:31.513306  756367 kic.go:430] container "old-k8s-version-745133" state is running.
	I0927 01:38:31.513730  756367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-745133
	I0927 01:38:31.541314  756367 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/config.json ...
	I0927 01:38:31.541536  756367 machine.go:93] provisionDockerMachine start ...
	I0927 01:38:31.541596  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:31.568274  756367 main.go:141] libmachine: Using SSH client type: native
	I0927 01:38:31.568542  756367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33791 <nil> <nil>}
	I0927 01:38:31.568552  756367 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:38:31.569210  756367 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56238->127.0.0.1:33791: read: connection reset by peer
	I0927 01:38:34.738421  756367 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-745133
	
	I0927 01:38:34.738443  756367 ubuntu.go:169] provisioning hostname "old-k8s-version-745133"
	I0927 01:38:34.738511  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:34.796845  756367 main.go:141] libmachine: Using SSH client type: native
	I0927 01:38:34.797095  756367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33791 <nil> <nil>}
	I0927 01:38:34.797107  756367 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-745133 && echo "old-k8s-version-745133" | sudo tee /etc/hostname
	I0927 01:38:34.956689  756367 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-745133
	
	I0927 01:38:34.956771  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:34.988239  756367 main.go:141] libmachine: Using SSH client type: native
	I0927 01:38:34.988564  756367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33791 <nil> <nil>}
	I0927 01:38:34.988587  756367 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-745133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-745133/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-745133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:38:35.129961  756367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:38:35.130105  756367 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19711-553751/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-553751/.minikube}
	I0927 01:38:35.130234  756367 ubuntu.go:177] setting up certificates
	I0927 01:38:35.130252  756367 provision.go:84] configureAuth start
	I0927 01:38:35.130401  756367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-745133
	I0927 01:38:35.154293  756367 provision.go:143] copyHostCerts
	I0927 01:38:35.154369  756367 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-553751/.minikube/key.pem, removing ...
	I0927 01:38:35.154390  756367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-553751/.minikube/key.pem
	I0927 01:38:35.154467  756367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/key.pem (1675 bytes)
	I0927 01:38:35.154580  756367 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-553751/.minikube/ca.pem, removing ...
	I0927 01:38:35.154592  756367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-553751/.minikube/ca.pem
	I0927 01:38:35.154621  756367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/ca.pem (1078 bytes)
	I0927 01:38:35.154693  756367 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-553751/.minikube/cert.pem, removing ...
	I0927 01:38:35.154705  756367 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-553751/.minikube/cert.pem
	I0927 01:38:35.154842  756367 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/cert.pem (1123 bytes)
	I0927 01:38:35.154921  756367 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-745133 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-745133]
	I0927 01:38:35.653504  756367 provision.go:177] copyRemoteCerts
	I0927 01:38:35.653623  756367 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:38:35.653707  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:35.669754  756367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33791 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/old-k8s-version-745133/id_rsa Username:docker}
	I0927 01:38:35.763854  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:38:35.788100  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 01:38:35.824120  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:38:35.871229  756367 provision.go:87] duration metric: took 740.96195ms to configureAuth
	I0927 01:38:35.871260  756367 ubuntu.go:193] setting minikube options for container-runtime
	I0927 01:38:35.871462  756367 config.go:182] Loaded profile config "old-k8s-version-745133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:38:35.871576  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:35.891631  756367 main.go:141] libmachine: Using SSH client type: native
	I0927 01:38:35.891894  756367 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33791 <nil> <nil>}
	I0927 01:38:35.891910  756367 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:38:36.255956  756367 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:38:36.255982  756367 machine.go:96] duration metric: took 4.714436747s to provisionDockerMachine
	I0927 01:38:36.255995  756367 start.go:293] postStartSetup for "old-k8s-version-745133" (driver="docker")
	I0927 01:38:36.256007  756367 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:38:36.256075  756367 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:38:36.256126  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:36.289392  756367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33791 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/old-k8s-version-745133/id_rsa Username:docker}
	I0927 01:38:36.384630  756367 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:38:36.388455  756367 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 01:38:36.388491  756367 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 01:38:36.388501  756367 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 01:38:36.388508  756367 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 01:38:36.388519  756367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-553751/.minikube/addons for local assets ...
	I0927 01:38:36.388572  756367 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-553751/.minikube/files for local assets ...
	I0927 01:38:36.388649  756367 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-553751/.minikube/files/etc/ssl/certs/5591582.pem -> 5591582.pem in /etc/ssl/certs
	I0927 01:38:36.388751  756367 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:38:36.398056  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/files/etc/ssl/certs/5591582.pem --> /etc/ssl/certs/5591582.pem (1708 bytes)
	I0927 01:38:36.422064  756367 start.go:296] duration metric: took 166.052315ms for postStartSetup
	I0927 01:38:36.422196  756367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 01:38:36.422266  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:36.440356  756367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33791 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/old-k8s-version-745133/id_rsa Username:docker}
	I0927 01:38:36.533176  756367 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 01:38:36.538143  756367 fix.go:56] duration metric: took 5.442518132s for fixHost
	I0927 01:38:36.538167  756367 start.go:83] releasing machines lock for "old-k8s-version-745133", held for 5.442569692s
	I0927 01:38:36.538236  756367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-745133
	I0927 01:38:36.565703  756367 ssh_runner.go:195] Run: cat /version.json
	I0927 01:38:36.565759  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:36.565773  756367 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:38:36.565847  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:36.601640  756367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33791 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/old-k8s-version-745133/id_rsa Username:docker}
	I0927 01:38:36.602402  756367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33791 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/old-k8s-version-745133/id_rsa Username:docker}
	I0927 01:38:36.702529  756367 ssh_runner.go:195] Run: systemctl --version
	I0927 01:38:36.837008  756367 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:38:36.997804  756367 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 01:38:37.002822  756367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:38:37.012804  756367 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0927 01:38:37.012888  756367 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:38:37.024476  756367 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 01:38:37.024504  756367 start.go:495] detecting cgroup driver to use...
	I0927 01:38:37.024547  756367 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 01:38:37.024606  756367 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:38:37.040293  756367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:38:37.056243  756367 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:38:37.056324  756367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:38:37.070937  756367 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:38:37.082845  756367 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:38:37.190782  756367 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:38:37.291082  756367 docker.go:233] disabling docker service ...
	I0927 01:38:37.291162  756367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:38:37.306050  756367 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:38:37.327846  756367 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:38:37.456667  756367 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:38:37.558394  756367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:38:37.572230  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:38:37.589299  756367 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 01:38:37.589368  756367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:38:37.599652  756367 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:38:37.599723  756367 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:38:37.609961  756367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:38:37.620123  756367 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:38:37.630094  756367 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:38:37.639923  756367 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:38:37.649130  756367 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:38:37.658260  756367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:38:37.760713  756367 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 01:38:38.425454  756367 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:38:38.425560  756367 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:38:38.430395  756367 start.go:563] Will wait 60s for crictl version
	I0927 01:38:38.430503  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:38:38.434135  756367 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:38:38.478583  756367 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0927 01:38:38.478704  756367 ssh_runner.go:195] Run: crio --version
	I0927 01:38:38.525278  756367 ssh_runner.go:195] Run: crio --version
	I0927 01:38:38.575705  756367 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.24.6 ...
	I0927 01:38:38.576873  756367 cli_runner.go:164] Run: docker network inspect old-k8s-version-745133 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 01:38:38.595107  756367 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0927 01:38:38.598870  756367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:38:38.609698  756367 kubeadm.go:883] updating cluster {Name:old-k8s-version-745133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745133 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins
:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:38:38.609826  756367 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 01:38:38.609889  756367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:38:38.675695  756367 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:38:38.675723  756367 crio.go:433] Images already preloaded, skipping extraction
	I0927 01:38:38.675777  756367 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:38:38.722855  756367 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:38:38.722882  756367 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:38:38.722891  756367 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 crio true true} ...
	I0927 01:38:38.723002  756367 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=old-k8s-version-745133 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:38:38.723089  756367 ssh_runner.go:195] Run: crio config
	I0927 01:38:38.808846  756367 cni.go:84] Creating CNI manager for ""
	I0927 01:38:38.808923  756367 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 01:38:38.808953  756367 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:38:38.808995  756367 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-745133 NodeName:old-k8s-version-745133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 01:38:38.809159  756367 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-745133"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 01:38:38.809248  756367 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 01:38:38.820790  756367 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:38:38.820907  756367 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:38:38.829540  756367 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (480 bytes)
	I0927 01:38:38.847476  756367 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:38:38.876557  756367 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0927 01:38:38.902384  756367 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0927 01:38:38.906213  756367 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:38:38.917552  756367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:38:39.021462  756367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:38:39.037690  756367 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133 for IP: 192.168.76.2
	I0927 01:38:39.037729  756367 certs.go:194] generating shared ca certs ...
	I0927 01:38:39.037746  756367 certs.go:226] acquiring lock for ca certs: {Name:mkd73b356b28d0818fea73c44481b0cb2597afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:38:39.037944  756367 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key
	I0927 01:38:39.038030  756367 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key
	I0927 01:38:39.038046  756367 certs.go:256] generating profile certs ...
	I0927 01:38:39.038177  756367 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.key
	I0927 01:38:39.038286  756367 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/apiserver.key.583406db
	I0927 01:38:39.038360  756367 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/proxy-client.key
	I0927 01:38:39.038510  756367 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/559158.pem (1338 bytes)
	W0927 01:38:39.038567  756367 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-553751/.minikube/certs/559158_empty.pem, impossibly tiny 0 bytes
	I0927 01:38:39.038581  756367 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem (1679 bytes)
	I0927 01:38:39.038621  756367 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:38:39.038666  756367 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:38:39.038705  756367 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem (1675 bytes)
	I0927 01:38:39.038802  756367 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/files/etc/ssl/certs/5591582.pem (1708 bytes)
	I0927 01:38:39.039569  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:38:39.095191  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 01:38:39.128291  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:38:39.210007  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 01:38:39.257289  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 01:38:39.283748  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 01:38:39.310189  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:38:39.337730  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:38:39.363604  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:38:39.390105  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/certs/559158.pem --> /usr/share/ca-certificates/559158.pem (1338 bytes)
	I0927 01:38:39.416466  756367 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/files/etc/ssl/certs/5591582.pem --> /usr/share/ca-certificates/5591582.pem (1708 bytes)
	I0927 01:38:39.442404  756367 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:38:39.461566  756367 ssh_runner.go:195] Run: openssl version
	I0927 01:38:39.467707  756367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/559158.pem && ln -fs /usr/share/ca-certificates/559158.pem /etc/ssl/certs/559158.pem"
	I0927 01:38:39.478048  756367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/559158.pem
	I0927 01:38:39.482154  756367 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:53 /usr/share/ca-certificates/559158.pem
	I0927 01:38:39.482250  756367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/559158.pem
	I0927 01:38:39.489813  756367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/559158.pem /etc/ssl/certs/51391683.0"
	I0927 01:38:39.500033  756367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5591582.pem && ln -fs /usr/share/ca-certificates/5591582.pem /etc/ssl/certs/5591582.pem"
	I0927 01:38:39.510090  756367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5591582.pem
	I0927 01:38:39.514132  756367 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:53 /usr/share/ca-certificates/5591582.pem
	I0927 01:38:39.514228  756367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5591582.pem
	I0927 01:38:39.521610  756367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5591582.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:38:39.532164  756367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:38:39.546657  756367 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:38:39.550990  756367 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:34 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:38:39.551107  756367 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:38:39.558286  756367 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:38:39.568014  756367 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:38:39.572326  756367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:38:39.579767  756367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:38:39.587174  756367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:38:39.594384  756367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:38:39.601799  756367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:38:39.609144  756367 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 01:38:39.618881  756367 kubeadm.go:392] StartCluster: {Name:old-k8s-version-745133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-745133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:38:39.619000  756367 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:38:39.619073  756367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:38:39.689800  756367 cri.go:89] found id: ""
	I0927 01:38:39.689878  756367 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:38:39.699548  756367 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:38:39.699570  756367 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:38:39.699635  756367 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:38:39.708367  756367 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:38:39.708855  756367 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-745133" does not appear in /home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 01:38:39.708975  756367 kubeconfig.go:62] /home/jenkins/minikube-integration/19711-553751/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-745133" cluster setting kubeconfig missing "old-k8s-version-745133" context setting]
	I0927 01:38:39.709281  756367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/kubeconfig: {Name:mkc30ade55bf966f83b95c0af3746bfadfd3f379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:38:39.711093  756367 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:38:39.720052  756367 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0927 01:38:39.720087  756367 kubeadm.go:597] duration metric: took 20.510233ms to restartPrimaryControlPlane
	I0927 01:38:39.720117  756367 kubeadm.go:394] duration metric: took 101.251649ms to StartCluster
	I0927 01:38:39.720133  756367 settings.go:142] acquiring lock: {Name:mk5b1f005001018637d448709269193603885722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:38:39.720196  756367 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 01:38:39.720882  756367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/kubeconfig: {Name:mkc30ade55bf966f83b95c0af3746bfadfd3f379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:38:39.721102  756367 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:38:39.721508  756367 config.go:182] Loaded profile config "old-k8s-version-745133": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 01:38:39.721508  756367 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:38:39.721628  756367 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-745133"
	I0927 01:38:39.721647  756367 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-745133"
	W0927 01:38:39.721654  756367 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:38:39.721684  756367 host.go:66] Checking if "old-k8s-version-745133" exists ...
	I0927 01:38:39.721702  756367 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-745133"
	I0927 01:38:39.721720  756367 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-745133"
	I0927 01:38:39.722045  756367 cli_runner.go:164] Run: docker container inspect old-k8s-version-745133 --format={{.State.Status}}
	I0927 01:38:39.722158  756367 cli_runner.go:164] Run: docker container inspect old-k8s-version-745133 --format={{.State.Status}}
	I0927 01:38:39.722666  756367 addons.go:69] Setting dashboard=true in profile "old-k8s-version-745133"
	I0927 01:38:39.722691  756367 addons.go:234] Setting addon dashboard=true in "old-k8s-version-745133"
	W0927 01:38:39.722700  756367 addons.go:243] addon dashboard should already be in state true
	I0927 01:38:39.722733  756367 host.go:66] Checking if "old-k8s-version-745133" exists ...
	I0927 01:38:39.723262  756367 cli_runner.go:164] Run: docker container inspect old-k8s-version-745133 --format={{.State.Status}}
	I0927 01:38:39.726129  756367 out.go:177] * Verifying Kubernetes components...
	I0927 01:38:39.726408  756367 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-745133"
	I0927 01:38:39.726437  756367 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-745133"
	W0927 01:38:39.726445  756367 addons.go:243] addon metrics-server should already be in state true
	I0927 01:38:39.726475  756367 host.go:66] Checking if "old-k8s-version-745133" exists ...
	I0927 01:38:39.727887  756367 cli_runner.go:164] Run: docker container inspect old-k8s-version-745133 --format={{.State.Status}}
	I0927 01:38:39.730898  756367 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:38:39.790349  756367 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-745133"
	W0927 01:38:39.790372  756367 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:38:39.790398  756367 host.go:66] Checking if "old-k8s-version-745133" exists ...
	I0927 01:38:39.794258  756367 cli_runner.go:164] Run: docker container inspect old-k8s-version-745133 --format={{.State.Status}}
	I0927 01:38:39.804273  756367 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0927 01:38:39.804355  756367 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:38:39.805439  756367 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0927 01:38:39.805637  756367 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:38:39.805651  756367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:38:39.805711  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:39.806923  756367 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0927 01:38:39.806943  756367 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0927 01:38:39.807004  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:39.818549  756367 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:38:39.823700  756367 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:38:39.823728  756367 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:38:39.823809  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:39.840518  756367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33791 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/old-k8s-version-745133/id_rsa Username:docker}
	I0927 01:38:39.866377  756367 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:38:39.866401  756367 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:38:39.866493  756367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-745133
	I0927 01:38:39.880465  756367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33791 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/old-k8s-version-745133/id_rsa Username:docker}
	I0927 01:38:39.886412  756367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33791 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/old-k8s-version-745133/id_rsa Username:docker}
	I0927 01:38:39.922810  756367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33791 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/old-k8s-version-745133/id_rsa Username:docker}
	I0927 01:38:39.985055  756367 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:38:40.003210  756367 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-745133" to be "Ready" ...
	I0927 01:38:40.057844  756367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:38:40.057867  756367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:38:40.089867  756367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:38:40.089890  756367 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:38:40.107223  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:38:40.126433  756367 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0927 01:38:40.126513  756367 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0927 01:38:40.144487  756367 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:38:40.144563  756367 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:38:40.161822  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:38:40.225678  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:38:40.243230  756367 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0927 01:38:40.243307  756367 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0927 01:38:40.390524  756367 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0927 01:38:40.390599  756367 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0927 01:38:40.427455  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:40.427604  756367 retry.go:31] will retry after 212.02116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 01:38:40.483903  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:40.483981  756367 retry.go:31] will retry after 253.279228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:40.488592  756367 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0927 01:38:40.488668  756367 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0927 01:38:40.535237  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:40.535319  756367 retry.go:31] will retry after 332.013907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:40.548411  756367 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0927 01:38:40.548490  756367 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0927 01:38:40.566752  756367 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0927 01:38:40.566779  756367 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0927 01:38:40.588997  756367 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0927 01:38:40.589020  756367 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0927 01:38:40.608532  756367 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0927 01:38:40.608557  756367 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0927 01:38:40.625938  756367 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 01:38:40.625964  756367 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0927 01:38:40.640081  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:38:40.649208  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 01:38:40.738401  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0927 01:38:40.829360  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:40.829393  756367 retry.go:31] will retry after 235.98769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 01:38:40.862769  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:40.862803  756367 retry.go:31] will retry after 198.388278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:40.868106  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 01:38:40.956412  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:40.956447  756367 retry.go:31] will retry after 271.48785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 01:38:41.022512  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.022544  756367 retry.go:31] will retry after 324.210146ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.061843  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 01:38:41.066141  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:38:41.228450  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0927 01:38:41.275769  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.275804  756367 retry.go:31] will retry after 357.305087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 01:38:41.320398  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.320433  756367 retry.go:31] will retry after 403.721821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.347460  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 01:38:41.351011  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.351039  756367 retry.go:31] will retry after 769.072146ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 01:38:41.449718  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.449753  756367 retry.go:31] will retry after 699.483002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.633714  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0927 01:38:41.712477  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.712561  756367 retry.go:31] will retry after 458.98365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.724664  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0927 01:38:41.826725  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:41.826808  756367 retry.go:31] will retry after 778.663686ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:42.004412  756367 node_ready.go:53] error getting node "old-k8s-version-745133": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-745133": dial tcp 192.168.76.2:8443: connect: connection refused
	I0927 01:38:42.120829  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:38:42.149549  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:38:42.172420  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0927 01:38:42.348127  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:42.348191  756367 retry.go:31] will retry after 829.830672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 01:38:42.411576  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:42.411634  756367 retry.go:31] will retry after 1.239024863s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 01:38:42.440968  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:42.441013  756367 retry.go:31] will retry after 551.484749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:42.605871  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0927 01:38:42.705900  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:42.705935  756367 retry.go:31] will retry after 1.03805135s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:42.993542  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0927 01:38:43.098373  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:43.098417  756367 retry.go:31] will retry after 991.478041ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:43.178768  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0927 01:38:43.277045  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:43.277130  756367 retry.go:31] will retry after 677.52818ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:43.651253  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 01:38:43.742670  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:43.742709  756367 retry.go:31] will retry after 1.714090062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:43.744979  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0927 01:38:43.846334  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:43.846366  756367 retry.go:31] will retry after 2.639690146s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:43.955677  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0927 01:38:44.046124  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:44.046210  756367 retry.go:31] will retry after 2.129902463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:44.090450  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0927 01:38:44.190771  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:44.190852  756367 retry.go:31] will retry after 1.912602745s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:44.504822  756367 node_ready.go:53] error getting node "old-k8s-version-745133": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-745133": dial tcp 192.168.76.2:8443: connect: connection refused
	I0927 01:38:45.457714  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 01:38:45.573824  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:45.573860  756367 retry.go:31] will retry after 2.755861707s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:46.104536  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 01:38:46.177128  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0927 01:38:46.226609  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:46.226640  756367 retry.go:31] will retry after 2.664547288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0927 01:38:46.333171  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:46.333209  756367 retry.go:31] will retry after 3.34269917s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:46.486597  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0927 01:38:46.586568  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:46.586598  756367 retry.go:31] will retry after 3.343825659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:47.004486  756367 node_ready.go:53] error getting node "old-k8s-version-745133": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-745133": dial tcp 192.168.76.2:8443: connect: connection refused
	I0927 01:38:48.330549  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0927 01:38:48.535048  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:48.535083  756367 retry.go:31] will retry after 4.072773211s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0927 01:38:48.891435  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 01:38:49.676739  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:38:49.931312  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:38:52.608682  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:38:59.004610  756367 node_ready.go:53] error getting node "old-k8s-version-745133": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-745133": net/http: TLS handshake timeout
	I0927 01:38:59.068286  756367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.176800119s)
	W0927 01:38:59.068341  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0927 01:38:59.068360  756367 retry.go:31] will retry after 3.419371875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0927 01:38:59.911914  756367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.235130517s)
	W0927 01:38:59.911949  756367 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0927 01:38:59.911967  756367 retry.go:31] will retry after 3.332826058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
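The retries above stall because the apiserver at 192.168.76.2:8443 is still coming back up (first connection refused, then TLS handshake timeouts). A minimal sketch, assuming only the endpoint and binary paths that appear in this log, of probing apiserver health directly while it restarts:

	# from the host; -k skips TLS verification against the cluster's self-signed cert
	curl -sk https://192.168.76.2:8443/healthz
	# or inside the node, using the same kubeconfig and kubectl binary the runner invokes over SSH
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl get --raw /healthz

Once either call returns "ok", the addon applies below can go through; until then minikube keeps backing off and retrying.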
	I0927 01:39:00.337705  756367 node_ready.go:49] node "old-k8s-version-745133" has status "Ready":"True"
	I0927 01:39:00.337734  756367 node_ready.go:38] duration metric: took 20.334478894s for node "old-k8s-version-745133" to be "Ready" ...
	I0927 01:39:00.337746  756367 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:39:00.465149  756367 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-drjjb" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:00.740815  756367 pod_ready.go:93] pod "coredns-74ff55c5b-drjjb" in "kube-system" namespace has status "Ready":"True"
	I0927 01:39:00.740841  756367 pod_ready.go:82] duration metric: took 269.275814ms for pod "coredns-74ff55c5b-drjjb" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:00.740853  756367 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-745133" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:00.892454  756367 pod_ready.go:93] pod "etcd-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"True"
	I0927 01:39:00.892525  756367 pod_ready.go:82] duration metric: took 151.66317ms for pod "etcd-old-k8s-version-745133" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:00.892555  756367 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-745133" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:00.981964  756367 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"True"
	I0927 01:39:00.982037  756367 pod_ready.go:82] duration metric: took 89.459529ms for pod "kube-apiserver-old-k8s-version-745133" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:00.982080  756367 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-745133" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:01.042886  756367 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"True"
	I0927 01:39:01.042962  756367 pod_ready.go:82] duration metric: took 60.854904ms for pod "kube-controller-manager-old-k8s-version-745133" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:01.043008  756367 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tvwdl" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:01.050840  756367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.119480907s)
	I0927 01:39:01.114774  756367 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.506040947s)
	I0927 01:39:01.114812  756367 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-745133"
	I0927 01:39:02.487944  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 01:39:03.080642  756367 pod_ready.go:103] pod "kube-proxy-tvwdl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:03.245473  756367 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:39:03.437236  756367 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-745133 addons enable metrics-server
	
	I0927 01:39:03.588531  756367 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0927 01:39:03.590023  756367 addons.go:510] duration metric: took 23.868514437s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
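All four addons are reported enabled after roughly 24s of retries. A minimal sketch, assuming the profile name old-k8s-version-745133 from the log (minikube names the kubectl context after the profile), of confirming the same state from the host:

	minikube -p old-k8s-version-745133 addons list
	kubectl --context old-k8s-version-745133 -n kube-system get pods
	kubectl --context old-k8s-version-745133 -n kubernetes-dashboard get pods

Note that "enabled" here only means the manifests applied; the readiness polling that follows is a separate step, and an enabled addon can still have pods that never become Ready, which is exactly what happens with metrics-server below.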
	I0927 01:39:05.549293  756367 pod_ready.go:103] pod "kube-proxy-tvwdl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:07.049982  756367 pod_ready.go:93] pod "kube-proxy-tvwdl" in "kube-system" namespace has status "Ready":"True"
	I0927 01:39:07.050053  756367 pod_ready.go:82] duration metric: took 6.007021862s for pod "kube-proxy-tvwdl" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:07.050081  756367 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace to be "Ready" ...
	I0927 01:39:09.059247  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:11.556193  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:13.556461  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:16.055723  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:18.057179  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:20.556931  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:22.562282  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:24.594163  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:27.063292  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:29.557030  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:32.056756  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:34.057336  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:36.058518  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:38.566951  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:41.056553  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:43.556559  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:46.056610  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:48.060648  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:50.067560  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:52.556434  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:55.057924  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:57.059910  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:59.557115  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:01.557789  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:03.559923  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:06.056636  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:08.057346  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:10.556242  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:12.556352  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:14.556505  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:16.558035  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:19.057804  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:21.556571  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:23.556806  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:26.056770  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:27.057249  756367 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"True"
	I0927 01:40:27.057275  756367 pod_ready.go:82] duration metric: took 1m20.007171419s for pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:27.057288  756367 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:29.065733  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:31.562875  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:33.563455  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:36.066295  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:38.563041  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:40.563941  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:43.064533  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:45.064689  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:47.563788  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:50.064619  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:52.563513  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:55.063431  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:57.566019  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:00.066276  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:02.563641  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:05.064215  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:07.065805  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:09.567334  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:12.063975  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:14.565667  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:17.063897  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:19.563290  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:22.064314  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:24.563394  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:27.062663  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:29.063595  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:31.562923  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:33.562977  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:36.063772  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:38.564170  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:41.063198  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:43.063908  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.065383  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:47.562925  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:49.564236  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.063522  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.064108  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:56.563250  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:58.564317  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:01.063874  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:03.562812  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:05.563181  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.563458  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:09.568052  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:12.064418  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:14.563145  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:17.064115  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:19.562933  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:21.563168  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:24.063568  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:26.064593  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.562897  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:30.562960  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.564141  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:35.063810  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:37.563916  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:40.063850  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:42.562665  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:44.563377  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:46.563810  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:49.064093  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:51.563284  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:53.563544  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:56.063585  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:58.068026  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:00.088088  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:02.124730  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.563544  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.564100  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.064077  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.064151  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.563466  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.063615  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.066965  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.069711  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.563100  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:24.563528  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.063842  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:29.064098  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.064496  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.564366  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:36.063123  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.063760  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.063959  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.564001  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.564177  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.063513  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.063777  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.064340  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.064600  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:55.564332  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.064170  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:00.065226  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:02.563727  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:04.564509  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:07.064530  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:09.563994  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.063999  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.563247  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:16.589478  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:19.064551  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.563101  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:23.563795  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:25.564112  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:27.057365  756367 pod_ready.go:82] duration metric: took 4m0.000050103s for pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace to be "Ready" ...
	E0927 01:44:27.057440  756367 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 01:44:27.057470  756367 pod_ready.go:39] duration metric: took 5m26.719711701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
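This is where the test's extra wait gives up: metrics-server-9975d5f86-5bphl never reaches Ready within its 4m budget, so the overall 5m26s wait ends with a context deadline. A minimal sketch of reproducing the same check by hand; the k8s-app=metrics-server selector is an assumption based on the standard metrics-server manifests, not something shown in this log:

	kubectl --context old-k8s-version-745133 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context old-k8s-version-745133 -n kube-system describe pod -l k8s-app=metrics-server
	kubectl --context old-k8s-version-745133 -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m

The describe output would surface the same ErrImagePull/ImagePullBackOff events against fake.domain/registry.k8s.io/echoserver:1.4 that the kubelet log gathered below records, which is why the pod stays NotReady for the whole window.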
	I0927 01:44:27.057518  756367 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:44:27.057586  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:27.057674  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:27.114452  756367 cri.go:89] found id: "728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4"
	I0927 01:44:27.114473  756367 cri.go:89] found id: ""
	I0927 01:44:27.114482  756367 logs.go:276] 1 containers: [728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4]
	I0927 01:44:27.114537  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.120741  756367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:27.120814  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:27.166103  756367 cri.go:89] found id: "fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81"
	I0927 01:44:27.166124  756367 cri.go:89] found id: ""
	I0927 01:44:27.166132  756367 logs.go:276] 1 containers: [fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81]
	I0927 01:44:27.166190  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.172322  756367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:27.172395  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:27.223120  756367 cri.go:89] found id: "6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a"
	I0927 01:44:27.223140  756367 cri.go:89] found id: ""
	I0927 01:44:27.223148  756367 logs.go:276] 1 containers: [6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a]
	I0927 01:44:27.223201  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.227267  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:27.227386  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:27.286086  756367 cri.go:89] found id: "1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0"
	I0927 01:44:27.286147  756367 cri.go:89] found id: ""
	I0927 01:44:27.286169  756367 logs.go:276] 1 containers: [1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0]
	I0927 01:44:27.286259  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.290363  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:27.290479  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:27.347335  756367 cri.go:89] found id: "1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e"
	I0927 01:44:27.347409  756367 cri.go:89] found id: ""
	I0927 01:44:27.347440  756367 logs.go:276] 1 containers: [1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e]
	I0927 01:44:27.347530  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.380567  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:27.380651  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:27.427366  756367 cri.go:89] found id: "fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa"
	I0927 01:44:27.427387  756367 cri.go:89] found id: ""
	I0927 01:44:27.427395  756367 logs.go:276] 1 containers: [fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa]
	I0927 01:44:27.427452  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.433497  756367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:27.433617  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:27.485055  756367 cri.go:89] found id: "f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48"
	I0927 01:44:27.485130  756367 cri.go:89] found id: ""
	I0927 01:44:27.485154  756367 logs.go:276] 1 containers: [f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48]
	I0927 01:44:27.485247  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.489063  756367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:27.489133  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:27.547438  756367 cri.go:89] found id: "7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c"
	I0927 01:44:27.547463  756367 cri.go:89] found id: ""
	I0927 01:44:27.547471  756367 logs.go:276] 1 containers: [7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c]
	I0927 01:44:27.547523  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.551928  756367 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:44:27.551996  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:44:27.598982  756367 cri.go:89] found id: "138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832"
	I0927 01:44:27.599002  756367 cri.go:89] found id: ""
	I0927 01:44:27.599009  756367 logs.go:276] 1 containers: [138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832]
	I0927 01:44:27.599063  756367 ssh_runner.go:195] Run: which crictl
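Having timed out on metrics-server, the runner switches to evidence gathering: it resolves one container ID per control-plane component via crictl before pulling logs. A minimal sketch of the same lookups run by hand inside the node; the commands mirror the ones the runner executes over SSH, and minikube ssh is the assumed way in:

	# open a shell on the node for this profile
	minikube -p old-k8s-version-745133 ssh
	# same queries the runner issues: one container ID per component
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	# then dump a container's log by ID, e.g. the kube-apiserver container found above
	sudo crictl logs 728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4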
	I0927 01:44:27.609032  756367 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:27.609060  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 01:44:27.686472  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.276608     736 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.686776  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277156     736 reflector.go:138] object-"kube-system"/"kube-proxy-token-mdl25": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-mdl25" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.687011  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277414     736 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.687250  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277602     736 reflector.go:138] object-"kube-system"/"kindnet-token-jwlc6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jwlc6" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.687495  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277796     736 reflector.go:138] object-"kube-system"/"coredns-token-k4cmv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-k4cmv" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.687749  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277995     736 reflector.go:138] object-"kube-system"/"storage-provisioner-token-tgp2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-tgp2f" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.687997  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.278201     736 reflector.go:138] object-"kube-system"/"metrics-server-token-9xfpw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9xfpw" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.688232  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.278383     736 reflector.go:138] object-"default"/"default-token-lm75v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lm75v" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.697891  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:03 old-k8s-version-745133 kubelet[736]: E0927 01:39:03.038144     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:27.698105  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:03 old-k8s-version-745133 kubelet[736]: E0927 01:39:03.733711     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.700305  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:17 old-k8s-version-745133 kubelet[736]: E0927 01:39:17.695628     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:27.700757  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:19 old-k8s-version-745133 kubelet[736]: E0927 01:39:19.799354     736 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-hcwf2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-hcwf2" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.703093  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:28 old-k8s-version-745133 kubelet[736]: E0927 01:39:28.679675     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.703606  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:31 old-k8s-version-745133 kubelet[736]: E0927 01:39:31.971044     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.704100  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:32 old-k8s-version-745133 kubelet[736]: E0927 01:39:32.973106     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.704469  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:39 old-k8s-version-745133 kubelet[736]: E0927 01:39:39.607023     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.706606  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:43 old-k8s-version-745133 kubelet[736]: E0927 01:39:43.686128     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:27.707301  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:55 old-k8s-version-745133 kubelet[736]: E0927 01:39:55.013155     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.707523  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:58 old-k8s-version-745133 kubelet[736]: E0927 01:39:58.675275     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.707878  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:59 old-k8s-version-745133 kubelet[736]: E0927 01:39:59.607420     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.708088  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:09 old-k8s-version-745133 kubelet[736]: E0927 01:40:09.675710     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.708444  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:11 old-k8s-version-745133 kubelet[736]: E0927 01:40:11.674655     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.709077  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:24 old-k8s-version-745133 kubelet[736]: E0927 01:40:24.053848     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.711317  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:24 old-k8s-version-745133 kubelet[736]: E0927 01:40:24.685441     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:27.711682  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:29 old-k8s-version-745133 kubelet[736]: E0927 01:40:29.607048     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.711894  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:38 old-k8s-version-745133 kubelet[736]: E0927 01:40:38.675142     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.712250  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:43 old-k8s-version-745133 kubelet[736]: E0927 01:40:43.674664     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.712463  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:50 old-k8s-version-745133 kubelet[736]: E0927 01:40:50.675165     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.712820  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:57 old-k8s-version-745133 kubelet[736]: E0927 01:40:57.675462     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.713034  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:04 old-k8s-version-745133 kubelet[736]: E0927 01:41:04.679852     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.713655  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:11 old-k8s-version-745133 kubelet[736]: E0927 01:41:11.121508     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.713899  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:17 old-k8s-version-745133 kubelet[736]: E0927 01:41:17.676076     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.714253  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:19 old-k8s-version-745133 kubelet[736]: E0927 01:41:19.607004     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.714545  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:28 old-k8s-version-745133 kubelet[736]: E0927 01:41:28.675474     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.714933  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:33 old-k8s-version-745133 kubelet[736]: E0927 01:41:33.674670     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.715143  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:41 old-k8s-version-745133 kubelet[736]: E0927 01:41:41.675136     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.715561  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:46 old-k8s-version-745133 kubelet[736]: E0927 01:41:46.674672     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.721157  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:55 old-k8s-version-745133 kubelet[736]: E0927 01:41:55.687901     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:27.721536  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:01 old-k8s-version-745133 kubelet[736]: E0927 01:42:01.675876     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.721796  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:10 old-k8s-version-745133 kubelet[736]: E0927 01:42:10.675343     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.722157  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:13 old-k8s-version-745133 kubelet[736]: E0927 01:42:13.675175     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.722367  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:22 old-k8s-version-745133 kubelet[736]: E0927 01:42:22.675195     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.722740  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:26 old-k8s-version-745133 kubelet[736]: E0927 01:42:26.674767     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.722949  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:33 old-k8s-version-745133 kubelet[736]: E0927 01:42:33.675487     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.723571  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:42 old-k8s-version-745133 kubelet[736]: E0927 01:42:42.264610     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.723782  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:47 old-k8s-version-745133 kubelet[736]: E0927 01:42:47.675472     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.724146  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:49 old-k8s-version-745133 kubelet[736]: E0927 01:42:49.615148     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.724507  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:01 old-k8s-version-745133 kubelet[736]: E0927 01:43:01.674781     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.724717  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:01 old-k8s-version-745133 kubelet[736]: E0927 01:43:01.675864     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.724926  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:13 old-k8s-version-745133 kubelet[736]: E0927 01:43:13.675722     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.725397  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:15 old-k8s-version-745133 kubelet[736]: E0927 01:43:15.674635     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.725777  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:26 old-k8s-version-745133 kubelet[736]: E0927 01:43:26.674663     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.725988  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:28 old-k8s-version-745133 kubelet[736]: E0927 01:43:28.675242     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.726198  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:39 old-k8s-version-745133 kubelet[736]: E0927 01:43:39.675681     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.726553  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:40 old-k8s-version-745133 kubelet[736]: E0927 01:43:40.674684     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.727033  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:50 old-k8s-version-745133 kubelet[736]: E0927 01:43:50.675166     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.727415  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:54 old-k8s-version-745133 kubelet[736]: E0927 01:43:54.675191     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.727656  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:02 old-k8s-version-745133 kubelet[736]: E0927 01:44:02.676623     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.728017  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.728228  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.728616  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	I0927 01:44:27.728630  756367 logs.go:123] Gathering logs for kube-controller-manager [fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa] ...
	I0927 01:44:27.728645  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa"
	I0927 01:44:27.845793  756367 logs.go:123] Gathering logs for kindnet [f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48] ...
	I0927 01:44:27.845828  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48"
	I0927 01:44:27.929600  756367 logs.go:123] Gathering logs for kubernetes-dashboard [7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c] ...
	I0927 01:44:27.929628  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c"
	I0927 01:44:28.012427  756367 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:28.012457  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:28.034915  756367 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:28.034993  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:28.149972  756367 logs.go:123] Gathering logs for kube-apiserver [728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4] ...
	I0927 01:44:28.150052  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4"
	I0927 01:44:28.283007  756367 logs.go:123] Gathering logs for kube-scheduler [1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0] ...
	I0927 01:44:28.283106  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0"
	I0927 01:44:28.353690  756367 logs.go:123] Gathering logs for kube-proxy [1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e] ...
	I0927 01:44:28.353785  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e"
	I0927 01:44:28.428293  756367 logs.go:123] Gathering logs for container status ...
	I0927 01:44:28.428328  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:28.500951  756367 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:28.501042  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:44:28.691339  756367 logs.go:123] Gathering logs for etcd [fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81] ...
	I0927 01:44:28.691372  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81"
	I0927 01:44:28.752406  756367 logs.go:123] Gathering logs for coredns [6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a] ...
	I0927 01:44:28.752437  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a"
	I0927 01:44:28.791959  756367 logs.go:123] Gathering logs for storage-provisioner [138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832] ...
	I0927 01:44:28.791991  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832"
	I0927 01:44:28.833586  756367 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:28.833613  756367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 01:44:28.833684  756367 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0927 01:44:28.833702  756367 out.go:270]   Sep 27 01:43:54 old-k8s-version-745133 kubelet[736]: E0927 01:43:54.675191     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	  Sep 27 01:43:54 old-k8s-version-745133 kubelet[736]: E0927 01:43:54.675191     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:28.833710  756367 out.go:270]   Sep 27 01:44:02 old-k8s-version-745133 kubelet[736]: E0927 01:44:02.676623     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 27 01:44:02 old-k8s-version-745133 kubelet[736]: E0927 01:44:02.676623     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:28.833752  756367 out.go:270]   Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	  Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:28.833765  756367 out.go:270]   Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:28.833773  756367 out.go:270]   Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	  Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	I0927 01:44:28.833795  756367 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:28.833815  756367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:44:38.835711  756367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:38.852118  756367 api_server.go:72] duration metric: took 5m59.130981288s to wait for apiserver process to appear ...
	I0927 01:44:38.852145  756367 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:44:38.852204  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:38.852315  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:38.922459  756367 cri.go:89] found id: "728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4"
	I0927 01:44:38.922483  756367 cri.go:89] found id: ""
	I0927 01:44:38.922490  756367 logs.go:276] 1 containers: [728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4]
	I0927 01:44:38.922545  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:38.927062  756367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:38.927133  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:38.981224  756367 cri.go:89] found id: "fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81"
	I0927 01:44:38.981250  756367 cri.go:89] found id: ""
	I0927 01:44:38.981259  756367 logs.go:276] 1 containers: [fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81]
	I0927 01:44:38.981318  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:38.986555  756367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:38.986636  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:39.039294  756367 cri.go:89] found id: "6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a"
	I0927 01:44:39.039320  756367 cri.go:89] found id: ""
	I0927 01:44:39.039328  756367 logs.go:276] 1 containers: [6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a]
	I0927 01:44:39.039386  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.043454  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:39.043529  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:39.104793  756367 cri.go:89] found id: "1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0"
	I0927 01:44:39.104822  756367 cri.go:89] found id: ""
	I0927 01:44:39.104831  756367 logs.go:276] 1 containers: [1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0]
	I0927 01:44:39.104884  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.110103  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:39.110168  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:39.174469  756367 cri.go:89] found id: "1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e"
	I0927 01:44:39.174497  756367 cri.go:89] found id: ""
	I0927 01:44:39.174505  756367 logs.go:276] 1 containers: [1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e]
	I0927 01:44:39.174562  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.178544  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:39.178620  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:39.229152  756367 cri.go:89] found id: "fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa"
	I0927 01:44:39.229176  756367 cri.go:89] found id: ""
	I0927 01:44:39.229184  756367 logs.go:276] 1 containers: [fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa]
	I0927 01:44:39.229244  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.232807  756367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:39.232877  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:39.273216  756367 cri.go:89] found id: "f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48"
	I0927 01:44:39.273239  756367 cri.go:89] found id: ""
	I0927 01:44:39.273247  756367 logs.go:276] 1 containers: [f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48]
	I0927 01:44:39.273305  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.277150  756367 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:44:39.277226  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:44:39.315705  756367 cri.go:89] found id: "138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832"
	I0927 01:44:39.315727  756367 cri.go:89] found id: ""
	I0927 01:44:39.315734  756367 logs.go:276] 1 containers: [138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832]
	I0927 01:44:39.315791  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.319422  756367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:39.319493  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:39.374266  756367 cri.go:89] found id: "7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c"
	I0927 01:44:39.374289  756367 cri.go:89] found id: ""
	I0927 01:44:39.374297  756367 logs.go:276] 1 containers: [7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c]
	I0927 01:44:39.374359  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.378605  756367 logs.go:123] Gathering logs for kindnet [f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48] ...
	I0927 01:44:39.378629  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48"
	I0927 01:44:39.431823  756367 logs.go:123] Gathering logs for kubernetes-dashboard [7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c] ...
	I0927 01:44:39.431856  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c"
	I0927 01:44:39.476375  756367 logs.go:123] Gathering logs for container status ...
	I0927 01:44:39.476405  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:39.531425  756367 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:39.531456  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:39.548458  756367 logs.go:123] Gathering logs for kube-proxy [1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e] ...
	I0927 01:44:39.548489  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e"
	I0927 01:44:39.587845  756367 logs.go:123] Gathering logs for kube-apiserver [728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4] ...
	I0927 01:44:39.587877  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4"
	I0927 01:44:39.665236  756367 logs.go:123] Gathering logs for kube-scheduler [1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0] ...
	I0927 01:44:39.665275  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0"
	I0927 01:44:39.711276  756367 logs.go:123] Gathering logs for storage-provisioner [138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832] ...
	I0927 01:44:39.711308  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832"
	I0927 01:44:39.751909  756367 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:39.751939  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:39.832973  756367 logs.go:123] Gathering logs for etcd [fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81] ...
	I0927 01:44:39.833012  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81"
	I0927 01:44:39.887308  756367 logs.go:123] Gathering logs for coredns [6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a] ...
	I0927 01:44:39.887339  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a"
	I0927 01:44:39.928050  756367 logs.go:123] Gathering logs for kube-controller-manager [fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa] ...
	I0927 01:44:39.928079  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa"
	I0927 01:44:40.029021  756367 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:40.029073  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 01:44:40.096759  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.276608     736 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.097055  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277156     736 reflector.go:138] object-"kube-system"/"kube-proxy-token-mdl25": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-mdl25" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.097340  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277414     736 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.097591  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277602     736 reflector.go:138] object-"kube-system"/"kindnet-token-jwlc6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jwlc6" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.097838  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277796     736 reflector.go:138] object-"kube-system"/"coredns-token-k4cmv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-k4cmv" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.098101  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277995     736 reflector.go:138] object-"kube-system"/"storage-provisioner-token-tgp2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-tgp2f" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.098367  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.278201     736 reflector.go:138] object-"kube-system"/"metrics-server-token-9xfpw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9xfpw" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.098599  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.278383     736 reflector.go:138] object-"default"/"default-token-lm75v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lm75v" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.108947  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:03 old-k8s-version-745133 kubelet[736]: E0927 01:39:03.038144     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:40.109179  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:03 old-k8s-version-745133 kubelet[736]: E0927 01:39:03.733711     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.111478  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:17 old-k8s-version-745133 kubelet[736]: E0927 01:39:17.695628     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:40.111949  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:19 old-k8s-version-745133 kubelet[736]: E0927 01:39:19.799354     736 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-hcwf2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-hcwf2" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.113700  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:28 old-k8s-version-745133 kubelet[736]: E0927 01:39:28.679675     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.114248  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:31 old-k8s-version-745133 kubelet[736]: E0927 01:39:31.971044     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.114769  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:32 old-k8s-version-745133 kubelet[736]: E0927 01:39:32.973106     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.115110  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:39 old-k8s-version-745133 kubelet[736]: E0927 01:39:39.607023     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.117338  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:43 old-k8s-version-745133 kubelet[736]: E0927 01:39:43.686128     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:40.117990  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:55 old-k8s-version-745133 kubelet[736]: E0927 01:39:55.013155     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.118191  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:58 old-k8s-version-745133 kubelet[736]: E0927 01:39:58.675275     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.118552  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:59 old-k8s-version-745133 kubelet[736]: E0927 01:39:59.607420     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.118760  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:09 old-k8s-version-745133 kubelet[736]: E0927 01:40:09.675710     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.119110  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:11 old-k8s-version-745133 kubelet[736]: E0927 01:40:11.674655     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.119744  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:24 old-k8s-version-745133 kubelet[736]: E0927 01:40:24.053848     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.121980  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:24 old-k8s-version-745133 kubelet[736]: E0927 01:40:24.685441     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:40.122341  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:29 old-k8s-version-745133 kubelet[736]: E0927 01:40:29.607048     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.122545  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:38 old-k8s-version-745133 kubelet[736]: E0927 01:40:38.675142     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.122921  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:43 old-k8s-version-745133 kubelet[736]: E0927 01:40:43.674664     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.123130  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:50 old-k8s-version-745133 kubelet[736]: E0927 01:40:50.675165     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.123528  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:57 old-k8s-version-745133 kubelet[736]: E0927 01:40:57.675462     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.123736  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:04 old-k8s-version-745133 kubelet[736]: E0927 01:41:04.679852     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.124391  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:11 old-k8s-version-745133 kubelet[736]: E0927 01:41:11.121508     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.124607  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:17 old-k8s-version-745133 kubelet[736]: E0927 01:41:17.676076     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.124961  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:19 old-k8s-version-745133 kubelet[736]: E0927 01:41:19.607004     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.125167  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:28 old-k8s-version-745133 kubelet[736]: E0927 01:41:28.675474     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.125540  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:33 old-k8s-version-745133 kubelet[736]: E0927 01:41:33.674670     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.125751  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:41 old-k8s-version-745133 kubelet[736]: E0927 01:41:41.675136     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.126107  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:46 old-k8s-version-745133 kubelet[736]: E0927 01:41:46.674672     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.128445  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:55 old-k8s-version-745133 kubelet[736]: E0927 01:41:55.687901     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:40.128841  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:01 old-k8s-version-745133 kubelet[736]: E0927 01:42:01.675876     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.129096  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:10 old-k8s-version-745133 kubelet[736]: E0927 01:42:10.675343     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.129444  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:13 old-k8s-version-745133 kubelet[736]: E0927 01:42:13.675175     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.129653  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:22 old-k8s-version-745133 kubelet[736]: E0927 01:42:22.675195     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.130012  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:26 old-k8s-version-745133 kubelet[736]: E0927 01:42:26.674767     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.130224  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:33 old-k8s-version-745133 kubelet[736]: E0927 01:42:33.675487     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.130896  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:42 old-k8s-version-745133 kubelet[736]: E0927 01:42:42.264610     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.131105  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:47 old-k8s-version-745133 kubelet[736]: E0927 01:42:47.675472     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.131468  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:49 old-k8s-version-745133 kubelet[736]: E0927 01:42:49.615148     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.131829  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:01 old-k8s-version-745133 kubelet[736]: E0927 01:43:01.674781     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.132045  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:01 old-k8s-version-745133 kubelet[736]: E0927 01:43:01.675864     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.132240  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:13 old-k8s-version-745133 kubelet[736]: E0927 01:43:13.675722     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.132601  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:15 old-k8s-version-745133 kubelet[736]: E0927 01:43:15.674635     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.132997  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:26 old-k8s-version-745133 kubelet[736]: E0927 01:43:26.674663     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.133196  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:28 old-k8s-version-745133 kubelet[736]: E0927 01:43:28.675242     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.133410  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:39 old-k8s-version-745133 kubelet[736]: E0927 01:43:39.675681     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.133777  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:40 old-k8s-version-745133 kubelet[736]: E0927 01:43:40.674684     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.134273  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:50 old-k8s-version-745133 kubelet[736]: E0927 01:43:50.675166     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.134641  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:54 old-k8s-version-745133 kubelet[736]: E0927 01:43:54.675191     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.134858  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:02 old-k8s-version-745133 kubelet[736]: E0927 01:44:02.676623     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.135214  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.135431  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.135825  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.136054  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:30 old-k8s-version-745133 kubelet[736]: E0927 01:44:30.675344     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.136427  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:34 old-k8s-version-745133 kubelet[736]: E0927 01:44:34.674650     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	I0927 01:44:40.136460  756367 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:40.136492  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:44:40.285959  756367 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:40.285985  756367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 01:44:40.286068  756367 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0927 01:44:40.286083  756367 out.go:270]   Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	  Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.286120  756367 out.go:270]   Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.286137  756367 out.go:270]   Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	  Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.286143  756367 out.go:270]   Sep 27 01:44:30 old-k8s-version-745133 kubelet[736]: E0927 01:44:30.675344     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 27 01:44:30 old-k8s-version-745133 kubelet[736]: E0927 01:44:30.675344     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.286152  756367 out.go:270]   Sep 27 01:44:34 old-k8s-version-745133 kubelet[736]: E0927 01:44:34.674650     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	  Sep 27 01:44:34 old-k8s-version-745133 kubelet[736]: E0927 01:44:34.674650     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	I0927 01:44:40.286163  756367 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:40.286170  756367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:44:50.288102  756367 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0927 01:44:50.299610  756367 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0927 01:44:50.302485  756367 out.go:201] 
	W0927 01:44:50.305011  756367 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0927 01:44:50.305047  756367 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0927 01:44:50.305076  756367 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0927 01:44:50.305085  756367 out.go:270] * 
	* 
	W0927 01:44:50.305893  756367 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:44:50.309681  756367 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-745133 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-745133
helpers_test.go:235: (dbg) docker inspect old-k8s-version-745133:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1fb14722efe43a080319cb455e783513aeccc71eb22ae6ffe2a2fad7eb054cbd",
	        "Created": "2024-09-27T01:35:25.064266883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 756563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-27T01:38:31.261910405Z",
	            "FinishedAt": "2024-09-27T01:38:28.876395785Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/1fb14722efe43a080319cb455e783513aeccc71eb22ae6ffe2a2fad7eb054cbd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1fb14722efe43a080319cb455e783513aeccc71eb22ae6ffe2a2fad7eb054cbd/hostname",
	        "HostsPath": "/var/lib/docker/containers/1fb14722efe43a080319cb455e783513aeccc71eb22ae6ffe2a2fad7eb054cbd/hosts",
	        "LogPath": "/var/lib/docker/containers/1fb14722efe43a080319cb455e783513aeccc71eb22ae6ffe2a2fad7eb054cbd/1fb14722efe43a080319cb455e783513aeccc71eb22ae6ffe2a2fad7eb054cbd-json.log",
	        "Name": "/old-k8s-version-745133",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-745133:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-745133",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8bd00b625bbb6997945479df7cdc39452a8ebf4d2a444713a51d41d8b0c82b9d-init/diff:/var/lib/docker/overlay2/e55adca0cb8a4469e5ee8e2f29139ff0ae0fed3b714ff629d2562144f224236f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8bd00b625bbb6997945479df7cdc39452a8ebf4d2a444713a51d41d8b0c82b9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8bd00b625bbb6997945479df7cdc39452a8ebf4d2a444713a51d41d8b0c82b9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8bd00b625bbb6997945479df7cdc39452a8ebf4d2a444713a51d41d8b0c82b9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-745133",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-745133/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-745133",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-745133",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-745133",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4ef66e00abbceac5df55fa768e7140615a83b644288f2a6c80e0e29a25cd6c28",
	            "SandboxKey": "/var/run/docker/netns/4ef66e00abbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33791"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-745133": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9ae640e03652055bfc7013669fd70c59d93c7e909cf97d9a587545f2d16c1372",
	                    "EndpointID": "e739956adfd474db77cac93206c7cf67c79864d07944336508476417b2709596",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-745133",
	                        "1fb14722efe4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-745133 -n old-k8s-version-745133
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-745133 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-745133 logs -n 25: (1.878436505s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-075073 sudo cat                              | cilium-075073            | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| ssh     | -p cilium-075073 sudo                                  | cilium-075073            | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | containerd config dump                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-075073 sudo                                  | cilium-075073            | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | systemctl status crio --all                            |                          |         |         |                     |                     |
	|         | --full --no-pager                                      |                          |         |         |                     |                     |
	| ssh     | -p cilium-075073 sudo                                  | cilium-075073            | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | systemctl cat crio --no-pager                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-075073 sudo find                             | cilium-075073            | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                          |         |         |                     |                     |
	| ssh     | -p cilium-075073 sudo crio                             | cilium-075073            | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC |                     |
	|         | config                                                 |                          |         |         |                     |                     |
	| delete  | -p cilium-075073                                       | cilium-075073            | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:34 UTC |
	| start   | -p cert-expiration-686343                              | cert-expiration-686343   | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:34 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-980399                            | force-systemd-env-980399 | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:34 UTC |
	| start   | -p cert-options-617701                                 | cert-options-617701      | jenkins | v1.34.0 | 27 Sep 24 01:34 UTC | 27 Sep 24 01:35 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	| ssh     | cert-options-617701 ssh                                | cert-options-617701      | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:35 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-617701 -- sudo                         | cert-options-617701      | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:35 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-617701                                 | cert-options-617701      | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:35 UTC |
	| start   | -p old-k8s-version-745133                              | old-k8s-version-745133   | jenkins | v1.34.0 | 27 Sep 24 01:35 UTC | 27 Sep 24 01:38 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-686343                              | cert-expiration-686343   | jenkins | v1.34.0 | 27 Sep 24 01:37 UTC | 27 Sep 24 01:38 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-686343                              | cert-expiration-686343   | jenkins | v1.34.0 | 27 Sep 24 01:38 UTC | 27 Sep 24 01:38 UTC |
	| start   | -p no-preload-874305                                   | no-preload-874305        | jenkins | v1.34.0 | 27 Sep 24 01:38 UTC | 27 Sep 24 01:39 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-745133        | old-k8s-version-745133   | jenkins | v1.34.0 | 27 Sep 24 01:38 UTC | 27 Sep 24 01:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-745133                              | old-k8s-version-745133   | jenkins | v1.34.0 | 27 Sep 24 01:38 UTC | 27 Sep 24 01:38 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-745133             | old-k8s-version-745133   | jenkins | v1.34.0 | 27 Sep 24 01:38 UTC | 27 Sep 24 01:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-745133                              | old-k8s-version-745133   | jenkins | v1.34.0 | 27 Sep 24 01:38 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-874305             | no-preload-874305        | jenkins | v1.34.0 | 27 Sep 24 01:39 UTC | 27 Sep 24 01:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-874305                                   | no-preload-874305        | jenkins | v1.34.0 | 27 Sep 24 01:39 UTC | 27 Sep 24 01:39 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-874305                  | no-preload-874305        | jenkins | v1.34.0 | 27 Sep 24 01:39 UTC | 27 Sep 24 01:39 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-874305                                   | no-preload-874305        | jenkins | v1.34.0 | 27 Sep 24 01:39 UTC | 27 Sep 24 01:44 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                               |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 01:39:49
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 01:39:49.432202  760583 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:39:49.432403  760583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:39:49.432416  760583 out.go:358] Setting ErrFile to fd 2...
	I0927 01:39:49.432422  760583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:39:49.432698  760583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 01:39:49.433086  760583 out.go:352] Setting JSON to false
	I0927 01:39:49.434104  760583 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19333,"bootTime":1727381857,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0927 01:39:49.434179  760583 start.go:139] virtualization:  
	I0927 01:39:49.438784  760583 out.go:177] * [no-preload-874305] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 01:39:49.441447  760583 notify.go:220] Checking for updates...
	I0927 01:39:49.442342  760583 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:39:49.445712  760583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:39:49.448323  760583 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 01:39:49.450814  760583 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	I0927 01:39:49.453320  760583 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 01:39:49.455825  760583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:39:49.458935  760583 config.go:182] Loaded profile config "no-preload-874305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:39:49.459662  760583 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:39:49.488834  760583 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 01:39:49.488967  760583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 01:39:49.548634  760583 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 01:39:49.538682777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 01:39:49.548755  760583 docker.go:318] overlay module found
	I0927 01:39:49.552368  760583 out.go:177] * Using the docker driver based on existing profile
	I0927 01:39:49.555814  760583 start.go:297] selected driver: docker
	I0927 01:39:49.555834  760583 start.go:901] validating driver "docker" against &{Name:no-preload-874305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-874305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:39:49.555985  760583 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:39:49.556645  760583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 01:39:49.607384  760583 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 01:39:49.597598576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 01:39:49.607782  760583 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:39:49.607809  760583 cni.go:84] Creating CNI manager for ""
	I0927 01:39:49.607856  760583 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 01:39:49.607896  760583 start.go:340] cluster config:
	{Name:no-preload-874305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-874305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:39:49.610816  760583 out.go:177] * Starting "no-preload-874305" primary control-plane node in "no-preload-874305" cluster
	I0927 01:39:49.613661  760583 cache.go:121] Beginning downloading kic base image for docker with crio
	I0927 01:39:49.616349  760583 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0927 01:39:49.618897  760583 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:39:49.618984  760583 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 01:39:49.619057  760583 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/config.json ...
	I0927 01:39:49.619336  760583 cache.go:107] acquiring lock: {Name:mk287fddb7d994e156cef35db45336f739ba74e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:39:49.619413  760583 cache.go:115] /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0927 01:39:49.619421  760583 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.148µs
	I0927 01:39:49.619430  760583 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0927 01:39:49.619440  760583 cache.go:107] acquiring lock: {Name:mkb65f3b3b2406076523fec877261bcd2f104a13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:39:49.619469  760583 cache.go:115] /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0927 01:39:49.619474  760583 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 35.757µs
	I0927 01:39:49.619480  760583 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0927 01:39:49.619490  760583 cache.go:107] acquiring lock: {Name:mkeacc8440e23d159f75b969c4efd8945365fce8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:39:49.619515  760583 cache.go:115] /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0927 01:39:49.619519  760583 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 31.663µs
	I0927 01:39:49.619526  760583 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0927 01:39:49.619534  760583 cache.go:107] acquiring lock: {Name:mkc2b9c18d8a0b2eddde982edeb93c3ddd646e50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:39:49.619562  760583 cache.go:115] /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0927 01:39:49.619567  760583 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 33.78µs
	I0927 01:39:49.619574  760583 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0927 01:39:49.619591  760583 cache.go:107] acquiring lock: {Name:mk88b20634b405983ddbd1bb07b52ebc74d75efd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:39:49.619616  760583 cache.go:115] /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0927 01:39:49.619621  760583 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 38.736µs
	I0927 01:39:49.619626  760583 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0927 01:39:49.619635  760583 cache.go:107] acquiring lock: {Name:mk3fb3ab1c8383c33142c34ae822518e58790cdd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:39:49.619665  760583 cache.go:115] /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0927 01:39:49.619670  760583 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 35.79µs
	I0927 01:39:49.619676  760583 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0927 01:39:49.619685  760583 cache.go:107] acquiring lock: {Name:mk553e852f036f5252f8d70bf3319da7e0cf5ed9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:39:49.619708  760583 cache.go:115] /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0927 01:39:49.619713  760583 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 29.168µs
	I0927 01:39:49.619719  760583 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0927 01:39:49.619727  760583 cache.go:107] acquiring lock: {Name:mkdcbbf974f3e51ff7c8bb7e74b309002c26fad1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:39:49.619752  760583 cache.go:115] /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0927 01:39:49.619756  760583 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 30.498µs
	I0927 01:39:49.619762  760583 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19711-553751/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0927 01:39:49.619771  760583 cache.go:87] Successfully saved all images to host disk.
	I0927 01:39:49.639655  760583 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon, skipping pull
	I0927 01:39:49.639679  760583 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in daemon, skipping load
	I0927 01:39:49.639699  760583 cache.go:194] Successfully downloaded all kic artifacts
	I0927 01:39:49.639726  760583 start.go:360] acquireMachinesLock for no-preload-874305: {Name:mk82c07c5f3bbfefcb29edf4c157dce4718ce2c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 01:39:49.639791  760583 start.go:364] duration metric: took 44.298µs to acquireMachinesLock for "no-preload-874305"
	I0927 01:39:49.639813  760583 start.go:96] Skipping create...Using existing machine configuration
	I0927 01:39:49.639820  760583 fix.go:54] fixHost starting: 
	I0927 01:39:49.640078  760583 cli_runner.go:164] Run: docker container inspect no-preload-874305 --format={{.State.Status}}
	I0927 01:39:49.656239  760583 fix.go:112] recreateIfNeeded on no-preload-874305: state=Stopped err=<nil>
	W0927 01:39:49.656265  760583 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 01:39:49.660879  760583 out.go:177] * Restarting existing docker container for "no-preload-874305" ...
	I0927 01:39:46.056610  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:48.060648  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:50.067560  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:49.663488  760583 cli_runner.go:164] Run: docker start no-preload-874305
	I0927 01:39:49.948460  760583 cli_runner.go:164] Run: docker container inspect no-preload-874305 --format={{.State.Status}}
	I0927 01:39:49.970297  760583 kic.go:430] container "no-preload-874305" state is running.
	I0927 01:39:49.970856  760583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-874305
	I0927 01:39:49.988592  760583 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/config.json ...
	I0927 01:39:49.988833  760583 machine.go:93] provisionDockerMachine start ...
	I0927 01:39:49.988902  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:50.020726  760583 main.go:141] libmachine: Using SSH client type: native
	I0927 01:39:50.021095  760583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33796 <nil> <nil>}
	I0927 01:39:50.021115  760583 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 01:39:50.022500  760583 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47788->127.0.0.1:33796: read: connection reset by peer
	I0927 01:39:53.162365  760583 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-874305
	
	I0927 01:39:53.162389  760583 ubuntu.go:169] provisioning hostname "no-preload-874305"
	I0927 01:39:53.162452  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:53.181425  760583 main.go:141] libmachine: Using SSH client type: native
	I0927 01:39:53.181676  760583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33796 <nil> <nil>}
	I0927 01:39:53.181694  760583 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-874305 && echo "no-preload-874305" | sudo tee /etc/hostname
	I0927 01:39:53.323042  760583 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-874305
	
	I0927 01:39:53.323150  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:53.341889  760583 main.go:141] libmachine: Using SSH client type: native
	I0927 01:39:53.342133  760583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33796 <nil> <nil>}
	I0927 01:39:53.342157  760583 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-874305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-874305/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-874305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 01:39:53.470934  760583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 01:39:53.470959  760583 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19711-553751/.minikube CaCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19711-553751/.minikube}
	I0927 01:39:53.470992  760583 ubuntu.go:177] setting up certificates
	I0927 01:39:53.471002  760583 provision.go:84] configureAuth start
	I0927 01:39:53.471073  760583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-874305
	I0927 01:39:53.489152  760583 provision.go:143] copyHostCerts
	I0927 01:39:53.489221  760583 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-553751/.minikube/ca.pem, removing ...
	I0927 01:39:53.489239  760583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-553751/.minikube/ca.pem
	I0927 01:39:53.489315  760583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/ca.pem (1078 bytes)
	I0927 01:39:53.489418  760583 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-553751/.minikube/cert.pem, removing ...
	I0927 01:39:53.489423  760583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-553751/.minikube/cert.pem
	I0927 01:39:53.489449  760583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/cert.pem (1123 bytes)
	I0927 01:39:53.489507  760583 exec_runner.go:144] found /home/jenkins/minikube-integration/19711-553751/.minikube/key.pem, removing ...
	I0927 01:39:53.489512  760583 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19711-553751/.minikube/key.pem
	I0927 01:39:53.489535  760583 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19711-553751/.minikube/key.pem (1675 bytes)
	I0927 01:39:53.489588  760583 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem org=jenkins.no-preload-874305 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-874305]
	I0927 01:39:54.143075  760583 provision.go:177] copyRemoteCerts
	I0927 01:39:54.143195  760583 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 01:39:54.143269  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:54.161436  760583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33796 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/no-preload-874305/id_rsa Username:docker}
	I0927 01:39:54.255570  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 01:39:54.280555  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0927 01:39:54.305959  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0927 01:39:54.331802  760583 provision.go:87] duration metric: took 860.787461ms to configureAuth
	I0927 01:39:54.331828  760583 ubuntu.go:193] setting minikube options for container-runtime
	I0927 01:39:54.332065  760583 config.go:182] Loaded profile config "no-preload-874305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:39:54.332181  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:54.359508  760583 main.go:141] libmachine: Using SSH client type: native
	I0927 01:39:54.359749  760583 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33796 <nil> <nil>}
	I0927 01:39:54.359772  760583 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 01:39:52.556434  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:55.057924  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:54.771464  760583 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 01:39:54.771488  760583 machine.go:96] duration metric: took 4.78263821s to provisionDockerMachine
	I0927 01:39:54.771500  760583 start.go:293] postStartSetup for "no-preload-874305" (driver="docker")
	I0927 01:39:54.771512  760583 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 01:39:54.771592  760583 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 01:39:54.771632  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:54.793178  760583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33796 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/no-preload-874305/id_rsa Username:docker}
	I0927 01:39:54.888492  760583 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 01:39:54.891895  760583 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 01:39:54.891931  760583 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 01:39:54.891943  760583 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 01:39:54.891950  760583 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0927 01:39:54.891962  760583 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-553751/.minikube/addons for local assets ...
	I0927 01:39:54.892028  760583 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-553751/.minikube/files for local assets ...
	I0927 01:39:54.892126  760583 filesync.go:149] local asset: /home/jenkins/minikube-integration/19711-553751/.minikube/files/etc/ssl/certs/5591582.pem -> 5591582.pem in /etc/ssl/certs
	I0927 01:39:54.892237  760583 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 01:39:54.900866  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/files/etc/ssl/certs/5591582.pem --> /etc/ssl/certs/5591582.pem (1708 bytes)
	I0927 01:39:54.926153  760583 start.go:296] duration metric: took 154.6362ms for postStartSetup
	I0927 01:39:54.926237  760583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 01:39:54.926281  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:54.944842  760583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33796 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/no-preload-874305/id_rsa Username:docker}
	I0927 01:39:55.040307  760583 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0927 01:39:55.045006  760583 fix.go:56] duration metric: took 5.405178101s for fixHost
	I0927 01:39:55.045073  760583 start.go:83] releasing machines lock for "no-preload-874305", held for 5.405270054s
	I0927 01:39:55.045180  760583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-874305
	I0927 01:39:55.064151  760583 ssh_runner.go:195] Run: cat /version.json
	I0927 01:39:55.064188  760583 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 01:39:55.064209  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:55.064264  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:55.085756  760583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33796 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/no-preload-874305/id_rsa Username:docker}
	I0927 01:39:55.093350  760583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33796 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/no-preload-874305/id_rsa Username:docker}
	I0927 01:39:55.321864  760583 ssh_runner.go:195] Run: systemctl --version
	I0927 01:39:55.326319  760583 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 01:39:55.478214  760583 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 01:39:55.482895  760583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:39:55.491833  760583 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0927 01:39:55.491913  760583 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 01:39:55.500841  760583 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
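The two `find ... -exec mv {} {}.mk_disabled` runs above side-line any pre-existing loopback/bridge/podman CNI configs so that only the CNI minikube manages stays active. A rough standalone Go sketch of the same renaming follows; it is an illustration that assumes a local /etc/cni/net.d and root privileges, not minikube's actual implementation.

// Sketch of the CNI-config disabling step above: rename loopback/bridge/podman
// configs in /etc/cni/net.d to *.mk_disabled so only the managed CNI stays active.
// Standalone illustration, not minikube code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a config file
		}
		if strings.Contains(name, "loopback") || strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println(err)
			}
		}
	}
}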
	I0927 01:39:55.500865  760583 start.go:495] detecting cgroup driver to use...
	I0927 01:39:55.500900  760583 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 01:39:55.500948  760583 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 01:39:55.513081  760583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 01:39:55.524877  760583 docker.go:217] disabling cri-docker service (if available) ...
	I0927 01:39:55.524949  760583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 01:39:55.538279  760583 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 01:39:55.549809  760583 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 01:39:55.634112  760583 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 01:39:55.725593  760583 docker.go:233] disabling docker service ...
	I0927 01:39:55.725708  760583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 01:39:55.738427  760583 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 01:39:55.751955  760583 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 01:39:55.833826  760583 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 01:39:55.918472  760583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 01:39:55.929945  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 01:39:55.947883  760583 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 01:39:55.948003  760583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:39:55.957585  760583 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 01:39:55.957697  760583 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:39:55.967782  760583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:39:55.979683  760583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:39:55.991131  760583 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 01:39:56.000969  760583 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:39:56.012291  760583 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:39:56.023767  760583 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 01:39:56.033765  760583 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 01:39:56.042982  760583 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 01:39:56.051481  760583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:39:56.137584  760583 ssh_runner.go:195] Run: sudo systemctl restart crio
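The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl) before crio is restarted. Below is a condensed local sketch of the core edits; it is illustrative only, shelling out with sudo on the current host rather than over minikube's ssh_runner, and it assumes the drop-in file already exists.

// A minimal local sketch of the main CRI-O drop-in edits shown above.
// Requires root and an existing /etc/crio/crio.conf.d/02-crio.conf; illustration only.
package main

import (
	"fmt"
	"os/exec"
)

func run(cmd string) error {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q: %v: %s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// pin the pause image and cgroup driver, as in the log
		fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, conf),
		fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"systemctl daemon-reload",
		"systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
}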
	I0927 01:39:56.248720  760583 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 01:39:56.248824  760583 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 01:39:56.252650  760583 start.go:563] Will wait 60s for crictl version
	I0927 01:39:56.252755  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:39:56.256204  760583 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 01:39:56.298432  760583 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0927 01:39:56.298522  760583 ssh_runner.go:195] Run: crio --version
	I0927 01:39:56.345137  760583 ssh_runner.go:195] Run: crio --version
	I0927 01:39:56.396102  760583 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0927 01:39:56.398840  760583 cli_runner.go:164] Run: docker network inspect no-preload-874305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0927 01:39:56.414688  760583 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0927 01:39:56.418522  760583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:39:56.429052  760583 kubeadm.go:883] updating cluster {Name:no-preload-874305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-874305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 01:39:56.429182  760583 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 01:39:56.429224  760583 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 01:39:56.474437  760583 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 01:39:56.474463  760583 cache_images.go:84] Images are preloaded, skipping loading
	I0927 01:39:56.474471  760583 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 crio true true} ...
	I0927 01:39:56.474567  760583 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-874305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-874305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 01:39:56.474665  760583 ssh_runner.go:195] Run: crio config
	I0927 01:39:56.529758  760583 cni.go:84] Creating CNI manager for ""
	I0927 01:39:56.529788  760583 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 01:39:56.529801  760583 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 01:39:56.529832  760583 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-874305 NodeName:no-preload-874305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 01:39:56.530023  760583 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-874305"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
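The generated kubeadm file above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a small sketch of reading it back, the snippet below decodes each document and spot-checks the kubelet stanza; it assumes gopkg.in/yaml.v3 and a local copy of the file (the path is taken from the scp destination logged a few lines below, so on a workstation you would point it at your own copy).

// Sketch (assuming gopkg.in/yaml.v3): walk the multi-document kubeadm config above
// and confirm the kubelet stanza pins the cgroupfs driver and the CRI-O socket.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log; use a local copy elsewhere
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])                 // expect cgroupfs
			fmt.Println("runtime endpoint:", doc["containerRuntimeEndpoint"]) // expect unix:///var/run/crio/crio.sock
		}
	}
}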
	
	I0927 01:39:56.530108  760583 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 01:39:56.543782  760583 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 01:39:56.543886  760583 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 01:39:56.555249  760583 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0927 01:39:56.576584  760583 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 01:39:56.595680  760583 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0927 01:39:56.614059  760583 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0927 01:39:56.618127  760583 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 01:39:56.629401  760583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:39:56.718251  760583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:39:56.732709  760583 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305 for IP: 192.168.85.2
	I0927 01:39:56.732728  760583 certs.go:194] generating shared ca certs ...
	I0927 01:39:56.732752  760583 certs.go:226] acquiring lock for ca certs: {Name:mkd73b356b28d0818fea73c44481b0cb2597afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:39:56.732903  760583 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key
	I0927 01:39:56.732956  760583 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key
	I0927 01:39:56.732969  760583 certs.go:256] generating profile certs ...
	I0927 01:39:56.733077  760583 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.key
	I0927 01:39:56.733150  760583 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/apiserver.key.ab45eab7
	I0927 01:39:56.733198  760583 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/proxy-client.key
	I0927 01:39:56.733309  760583 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/559158.pem (1338 bytes)
	W0927 01:39:56.733341  760583 certs.go:480] ignoring /home/jenkins/minikube-integration/19711-553751/.minikube/certs/559158_empty.pem, impossibly tiny 0 bytes
	I0927 01:39:56.733354  760583 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca-key.pem (1679 bytes)
	I0927 01:39:56.733381  760583 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/ca.pem (1078 bytes)
	I0927 01:39:56.733409  760583 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/cert.pem (1123 bytes)
	I0927 01:39:56.733434  760583 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/certs/key.pem (1675 bytes)
	I0927 01:39:56.733482  760583 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-553751/.minikube/files/etc/ssl/certs/5591582.pem (1708 bytes)
	I0927 01:39:56.734177  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 01:39:56.759725  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 01:39:56.788950  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 01:39:56.818885  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 01:39:56.882678  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 01:39:56.919847  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 01:39:56.948201  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 01:39:56.974025  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 01:39:56.999723  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 01:39:57.024674  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/certs/559158.pem --> /usr/share/ca-certificates/559158.pem (1338 bytes)
	I0927 01:39:57.049437  760583 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19711-553751/.minikube/files/etc/ssl/certs/5591582.pem --> /usr/share/ca-certificates/5591582.pem (1708 bytes)
	I0927 01:39:57.078543  760583 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 01:39:57.096926  760583 ssh_runner.go:195] Run: openssl version
	I0927 01:39:57.103841  760583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 01:39:57.113845  760583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:39:57.117694  760583 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 00:34 /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:39:57.117756  760583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 01:39:57.124776  760583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 01:39:57.133927  760583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/559158.pem && ln -fs /usr/share/ca-certificates/559158.pem /etc/ssl/certs/559158.pem"
	I0927 01:39:57.143005  760583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/559158.pem
	I0927 01:39:57.146615  760583 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 00:53 /usr/share/ca-certificates/559158.pem
	I0927 01:39:57.146819  760583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/559158.pem
	I0927 01:39:57.153745  760583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/559158.pem /etc/ssl/certs/51391683.0"
	I0927 01:39:57.162658  760583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5591582.pem && ln -fs /usr/share/ca-certificates/5591582.pem /etc/ssl/certs/5591582.pem"
	I0927 01:39:57.172211  760583 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5591582.pem
	I0927 01:39:57.175809  760583 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 00:53 /usr/share/ca-certificates/5591582.pem
	I0927 01:39:57.175924  760583 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5591582.pem
	I0927 01:39:57.182775  760583 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5591582.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 01:39:57.191625  760583 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 01:39:57.195387  760583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 01:39:57.202285  760583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 01:39:57.209269  760583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 01:39:57.216223  760583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 01:39:57.223127  760583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 01:39:57.229901  760583 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
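The `openssl x509 -noout -checkend 86400` probes above simply ask whether each control-plane certificate will still be valid 24 hours from now. For reference, an equivalent standalone check in Go (my own sketch, not minikube code) looks like this:

// Sketch of the `openssl x509 -noout -checkend 86400` probes above: reports whether
// a PEM certificate is still valid 24h from now. Not minikube code, just an equivalent.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// true if the certificate's NotAfter is still in the future at now+d
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}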
	I0927 01:39:57.236817  760583 kubeadm.go:392] StartCluster: {Name:no-preload-874305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-874305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 01:39:57.236919  760583 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 01:39:57.237018  760583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 01:39:57.282168  760583 cri.go:89] found id: ""
	I0927 01:39:57.282240  760583 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 01:39:57.296589  760583 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 01:39:57.296608  760583 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 01:39:57.296659  760583 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 01:39:57.308454  760583 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 01:39:57.309065  760583 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-874305" does not appear in /home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 01:39:57.309380  760583 kubeconfig.go:62] /home/jenkins/minikube-integration/19711-553751/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-874305" cluster setting kubeconfig missing "no-preload-874305" context setting]
	I0927 01:39:57.309817  760583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/kubeconfig: {Name:mkc30ade55bf966f83b95c0af3746bfadfd3f379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:39:57.311292  760583 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 01:39:57.323780  760583 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0927 01:39:57.323812  760583 kubeadm.go:597] duration metric: took 27.197867ms to restartPrimaryControlPlane
	I0927 01:39:57.323821  760583 kubeadm.go:394] duration metric: took 87.013994ms to StartCluster
	I0927 01:39:57.323837  760583 settings.go:142] acquiring lock: {Name:mk5b1f005001018637d448709269193603885722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:39:57.323900  760583 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 01:39:57.324880  760583 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-553751/kubeconfig: {Name:mkc30ade55bf966f83b95c0af3746bfadfd3f379 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 01:39:57.325061  760583 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 01:39:57.325407  760583 config.go:182] Loaded profile config "no-preload-874305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:39:57.325453  760583 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 01:39:57.325570  760583 addons.go:69] Setting storage-provisioner=true in profile "no-preload-874305"
	I0927 01:39:57.325595  760583 addons.go:234] Setting addon storage-provisioner=true in "no-preload-874305"
	W0927 01:39:57.325604  760583 addons.go:243] addon storage-provisioner should already be in state true
	I0927 01:39:57.325626  760583 host.go:66] Checking if "no-preload-874305" exists ...
	I0927 01:39:57.326172  760583 cli_runner.go:164] Run: docker container inspect no-preload-874305 --format={{.State.Status}}
	I0927 01:39:57.326316  760583 addons.go:69] Setting default-storageclass=true in profile "no-preload-874305"
	I0927 01:39:57.326340  760583 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-874305"
	I0927 01:39:57.326585  760583 cli_runner.go:164] Run: docker container inspect no-preload-874305 --format={{.State.Status}}
	I0927 01:39:57.326652  760583 addons.go:69] Setting metrics-server=true in profile "no-preload-874305"
	I0927 01:39:57.326668  760583 addons.go:234] Setting addon metrics-server=true in "no-preload-874305"
	W0927 01:39:57.326674  760583 addons.go:243] addon metrics-server should already be in state true
	I0927 01:39:57.326699  760583 host.go:66] Checking if "no-preload-874305" exists ...
	I0927 01:39:57.327101  760583 cli_runner.go:164] Run: docker container inspect no-preload-874305 --format={{.State.Status}}
	I0927 01:39:57.327831  760583 addons.go:69] Setting dashboard=true in profile "no-preload-874305"
	I0927 01:39:57.327856  760583 addons.go:234] Setting addon dashboard=true in "no-preload-874305"
	W0927 01:39:57.327864  760583 addons.go:243] addon dashboard should already be in state true
	I0927 01:39:57.327886  760583 host.go:66] Checking if "no-preload-874305" exists ...
	I0927 01:39:57.328284  760583 cli_runner.go:164] Run: docker container inspect no-preload-874305 --format={{.State.Status}}
	I0927 01:39:57.331666  760583 out.go:177] * Verifying Kubernetes components...
	I0927 01:39:57.334843  760583 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 01:39:57.385337  760583 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 01:39:57.388522  760583 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:39:57.388545  760583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 01:39:57.388608  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:57.395471  760583 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0927 01:39:57.395531  760583 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0927 01:39:57.398896  760583 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0927 01:39:57.398954  760583 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 01:39:57.398971  760583 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 01:39:57.399040  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:57.402079  760583 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0927 01:39:57.402104  760583 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0927 01:39:57.402179  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:57.408670  760583 addons.go:234] Setting addon default-storageclass=true in "no-preload-874305"
	W0927 01:39:57.408692  760583 addons.go:243] addon default-storageclass should already be in state true
	I0927 01:39:57.408719  760583 host.go:66] Checking if "no-preload-874305" exists ...
	I0927 01:39:57.409316  760583 cli_runner.go:164] Run: docker container inspect no-preload-874305 --format={{.State.Status}}
	I0927 01:39:57.459027  760583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33796 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/no-preload-874305/id_rsa Username:docker}
	I0927 01:39:57.488269  760583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33796 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/no-preload-874305/id_rsa Username:docker}
	I0927 01:39:57.488402  760583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33796 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/no-preload-874305/id_rsa Username:docker}
	I0927 01:39:57.489371  760583 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 01:39:57.489402  760583 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 01:39:57.489462  760583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-874305
	I0927 01:39:57.518382  760583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33796 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/no-preload-874305/id_rsa Username:docker}
	I0927 01:39:57.716234  760583 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 01:39:57.716305  760583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0927 01:39:57.775954  760583 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 01:39:57.780434  760583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:39:57.803319  760583 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0927 01:39:57.803346  760583 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0927 01:39:57.807662  760583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:39:57.823230  760583 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 01:39:57.823258  760583 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 01:39:57.847328  760583 node_ready.go:35] waiting up to 6m0s for node "no-preload-874305" to be "Ready" ...
	I0927 01:39:57.877088  760583 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:39:57.877154  760583 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 01:39:57.890050  760583 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0927 01:39:57.890117  760583 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0927 01:39:57.970531  760583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:39:57.976239  760583 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0927 01:39:57.976307  760583 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0927 01:39:58.027457  760583 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0927 01:39:58.027535  760583 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0927 01:39:58.071576  760583 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0927 01:39:58.071678  760583 retry.go:31] will retry after 307.31133ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0927 01:39:58.103084  760583 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0927 01:39:58.103171  760583 retry.go:31] will retry after 126.750889ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0927 01:39:58.110010  760583 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0927 01:39:58.110085  760583 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0927 01:39:58.136581  760583 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0927 01:39:58.136666  760583 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0927 01:39:58.187995  760583 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0927 01:39:58.188078  760583 retry.go:31] will retry after 266.894234ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0927 01:39:58.189743  760583 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0927 01:39:58.189802  760583 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0927 01:39:58.227937  760583 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0927 01:39:58.228010  760583 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0927 01:39:58.230178  760583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:39:58.263546  760583 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0927 01:39:58.263634  760583 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0927 01:39:58.303789  760583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0927 01:39:58.359130  760583 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0927 01:39:58.359219  760583 retry.go:31] will retry after 231.162228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0927 01:39:58.379431  760583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0927 01:39:58.455850  760583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 01:39:58.590529  760583 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 01:39:57.059910  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:39:59.557115  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:03.111742  760583 node_ready.go:49] node "no-preload-874305" has status "Ready":"True"
	I0927 01:40:03.111766  760583 node_ready.go:38] duration metric: took 5.264368061s for node "no-preload-874305" to be "Ready" ...
	I0927 01:40:03.111776  760583 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:40:03.533703  760583 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5wdrw" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:03.755707  760583 pod_ready.go:93] pod "coredns-7c65d6cfc9-5wdrw" in "kube-system" namespace has status "Ready":"True"
	I0927 01:40:03.755733  760583 pod_ready.go:82] duration metric: took 221.988714ms for pod "coredns-7c65d6cfc9-5wdrw" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:03.755748  760583 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-874305" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:03.829904  760583 pod_ready.go:93] pod "etcd-no-preload-874305" in "kube-system" namespace has status "Ready":"True"
	I0927 01:40:03.829975  760583 pod_ready.go:82] duration metric: took 74.21889ms for pod "etcd-no-preload-874305" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:03.830005  760583 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-874305" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:03.899683  760583 pod_ready.go:93] pod "kube-apiserver-no-preload-874305" in "kube-system" namespace has status "Ready":"True"
	I0927 01:40:03.899762  760583 pod_ready.go:82] duration metric: took 69.735167ms for pod "kube-apiserver-no-preload-874305" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:03.899790  760583 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-874305" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:03.970570  760583 pod_ready.go:93] pod "kube-controller-manager-no-preload-874305" in "kube-system" namespace has status "Ready":"True"
	I0927 01:40:03.970642  760583 pod_ready.go:82] duration metric: took 70.815389ms for pod "kube-controller-manager-no-preload-874305" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:03.970670  760583 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mghm9" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:04.035941  760583 pod_ready.go:93] pod "kube-proxy-mghm9" in "kube-system" namespace has status "Ready":"True"
	I0927 01:40:04.036019  760583 pod_ready.go:82] duration metric: took 65.326158ms for pod "kube-proxy-mghm9" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:04.036046  760583 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-874305" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:04.237971  760583 pod_ready.go:93] pod "kube-scheduler-no-preload-874305" in "kube-system" namespace has status "Ready":"True"
	I0927 01:40:04.238046  760583 pod_ready.go:82] duration metric: took 201.977697ms for pod "kube-scheduler-no-preload-874305" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:04.238074  760583 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:05.251498  760583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.947609147s)
	I0927 01:40:05.251728  760583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.872217876s)
	I0927 01:40:05.251836  760583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.795898035s)
	I0927 01:40:05.251987  760583 addons.go:475] Verifying addon metrics-server=true in "no-preload-874305"
	I0927 01:40:05.251884  760583 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.66127771s)
	I0927 01:40:05.254631  760583 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-874305 addons enable metrics-server
	
	I0927 01:40:05.268795  760583 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0927 01:40:01.557789  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:03.559923  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:05.271504  760583 addons.go:510] duration metric: took 7.946040441s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0927 01:40:06.245246  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:08.246668  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:06.056636  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:08.057346  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:10.556242  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:10.247861  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:12.253586  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:12.556352  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:14.556505  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:14.745119  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:16.747653  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:19.249117  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:16.558035  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:19.057804  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:21.743856  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:23.745910  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:21.556571  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:23.556806  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:26.244896  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:28.744160  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:26.056770  756367 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:27.057249  756367 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace has status "Ready":"True"
	I0927 01:40:27.057275  756367 pod_ready.go:82] duration metric: took 1m20.007171419s for pod "kube-scheduler-old-k8s-version-745133" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:27.057288  756367 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace to be "Ready" ...
	I0927 01:40:29.065733  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:31.244299  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:33.244963  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:31.562875  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:33.563455  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:35.744408  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:37.744641  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:36.066295  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:38.563041  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:40.563941  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:39.744709  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:42.244207  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:44.244259  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:43.064533  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:45.064689  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:46.744343  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:48.744706  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:47.563788  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:50.064619  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:51.244855  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:53.245125  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:52.563513  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:55.063431  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:55.744613  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:58.244399  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:40:57.566019  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:00.066276  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:00.245381  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:02.744575  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:02.563641  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:05.064215  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:05.245319  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:07.744824  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:07.065805  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:09.567334  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:10.244547  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:12.744331  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:12.063975  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:14.565667  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:14.744392  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:16.744766  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:19.244874  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:17.063897  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:19.563290  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:21.744416  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:23.744460  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:22.064314  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:24.563394  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:26.244482  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:28.744652  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:27.062663  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:29.063595  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:30.744835  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:33.245432  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:31.562923  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:33.562977  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:35.744065  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:37.748345  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:36.063772  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:38.564170  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:40.244648  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:42.245802  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:41.063198  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:43.063908  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:45.065383  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:44.744310  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:47.244995  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:49.245250  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:47.562925  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:49.564236  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:51.743817  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.245028  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:52.063522  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:54.064108  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:56.744679  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:59.243704  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:56.563250  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:41:58.564317  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:01.248635  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:03.744172  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:01.063874  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:03.562812  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:05.563181  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:06.243849  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:08.244123  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:07.563458  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:09.568052  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:10.744407  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:13.244608  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:12.064418  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:14.563145  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:15.744169  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:17.744755  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:17.064115  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:19.562933  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:20.245386  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:22.744832  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:21.563168  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:24.063568  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:25.244597  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:27.743578  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:26.064593  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:28.562897  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:30.562960  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:29.743961  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:31.744079  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:34.243858  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:32.564141  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:35.063810  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:36.244504  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:38.744787  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:37.563916  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:40.063850  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:41.244100  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:43.244511  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:42.562665  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:44.563377  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:45.244693  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:47.743863  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:46.563810  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:49.064093  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:49.743914  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:52.243861  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:54.244684  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:51.563284  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:53.563544  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:56.744390  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:59.244065  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:56.063585  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:42:58.068026  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:00.088088  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:01.245202  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:03.247924  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:02.124730  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:04.563544  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:05.744349  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:08.243974  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:06.564100  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:09.064077  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:10.244981  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:12.743877  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:11.064151  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:13.563466  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:14.745038  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:17.244194  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:16.063615  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:18.066965  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:20.069711  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:19.743841  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:21.744247  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:23.744326  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:22.563100  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:24.563528  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:26.244122  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:28.744658  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:27.063842  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:29.064098  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.244349  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.245189  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:31.064496  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:33.564366  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:35.743546  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:37.744304  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:36.063123  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:38.063760  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:40.063959  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:39.744382  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.244088  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.244452  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:42.564001  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:44.564177  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:46.744823  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.244570  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:47.063513  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:49.063777  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.744458  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.744487  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:51.064340  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:53.064600  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:55.564332  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:56.245083  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.745184  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:43:58.064170  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:00.065226  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:01.244567  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:03.744577  760583 pod_ready.go:103] pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:04.244654  760583 pod_ready.go:82] duration metric: took 4m0.006552829s for pod "metrics-server-6867b74b74-t5lkb" in "kube-system" namespace to be "Ready" ...
	E0927 01:44:04.244682  760583 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 01:44:04.244692  760583 pod_ready.go:39] duration metric: took 4m1.132904912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:44:04.244706  760583 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:44:04.244735  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:04.244809  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:04.286578  760583 cri.go:89] found id: "09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e"
	I0927 01:44:04.286604  760583 cri.go:89] found id: ""
	I0927 01:44:04.286613  760583 logs.go:276] 1 containers: [09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e]
	I0927 01:44:04.286670  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:04.290915  760583 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:04.290994  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:04.331199  760583 cri.go:89] found id: "ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7"
	I0927 01:44:04.331224  760583 cri.go:89] found id: ""
	I0927 01:44:04.331232  760583 logs.go:276] 1 containers: [ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7]
	I0927 01:44:04.331291  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:04.334983  760583 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:04.335084  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:04.381678  760583 cri.go:89] found id: "0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142"
	I0927 01:44:04.381702  760583 cri.go:89] found id: ""
	I0927 01:44:04.381721  760583 logs.go:276] 1 containers: [0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142]
	I0927 01:44:04.381778  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:04.385542  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:04.385618  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:04.425076  760583 cri.go:89] found id: "4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31"
	I0927 01:44:04.425097  760583 cri.go:89] found id: ""
	I0927 01:44:04.425105  760583 logs.go:276] 1 containers: [4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31]
	I0927 01:44:04.425198  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:04.429023  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:04.429099  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:02.563727  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:04.564509  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:04.495367  760583 cri.go:89] found id: "97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421"
	I0927 01:44:04.495390  760583 cri.go:89] found id: ""
	I0927 01:44:04.495398  760583 logs.go:276] 1 containers: [97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421]
	I0927 01:44:04.495457  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:04.499012  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:04.499128  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:04.548409  760583 cri.go:89] found id: "85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1"
	I0927 01:44:04.548441  760583 cri.go:89] found id: ""
	I0927 01:44:04.548449  760583 logs.go:276] 1 containers: [85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1]
	I0927 01:44:04.548508  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:04.552448  760583 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:04.552524  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:04.596013  760583 cri.go:89] found id: "af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f"
	I0927 01:44:04.596078  760583 cri.go:89] found id: ""
	I0927 01:44:04.596101  760583 logs.go:276] 1 containers: [af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f]
	I0927 01:44:04.596167  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:04.599905  760583 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:04.599987  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:04.644039  760583 cri.go:89] found id: "4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567"
	I0927 01:44:04.644115  760583 cri.go:89] found id: ""
	I0927 01:44:04.644131  760583 logs.go:276] 1 containers: [4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567]
	I0927 01:44:04.644191  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:04.648048  760583 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:44:04.648125  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:44:04.695451  760583 cri.go:89] found id: "81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f"
	I0927 01:44:04.695473  760583 cri.go:89] found id: "4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257"
	I0927 01:44:04.695478  760583 cri.go:89] found id: ""
	I0927 01:44:04.695486  760583 logs.go:276] 2 containers: [81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f 4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257]
	I0927 01:44:04.695544  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:04.699183  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:04.702594  760583 logs.go:123] Gathering logs for storage-provisioner [81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f] ...
	I0927 01:44:04.702619  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f"
	I0927 01:44:04.745458  760583 logs.go:123] Gathering logs for storage-provisioner [4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257] ...
	I0927 01:44:04.745492  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257"
	I0927 01:44:04.787899  760583 logs.go:123] Gathering logs for coredns [0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142] ...
	I0927 01:44:04.787925  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142"
	I0927 01:44:04.832075  760583 logs.go:123] Gathering logs for kube-scheduler [4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31] ...
	I0927 01:44:04.832105  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31"
	I0927 01:44:04.882054  760583 logs.go:123] Gathering logs for kubernetes-dashboard [4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567] ...
	I0927 01:44:04.882085  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567"
	I0927 01:44:04.920205  760583 logs.go:123] Gathering logs for etcd [ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7] ...
	I0927 01:44:04.920235  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7"
	I0927 01:44:04.971251  760583 logs.go:123] Gathering logs for kube-proxy [97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421] ...
	I0927 01:44:04.971284  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421"
	I0927 01:44:05.013337  760583 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:05.013365  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:05.092979  760583 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:05.093018  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:05.111247  760583 logs.go:123] Gathering logs for kube-apiserver [09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e] ...
	I0927 01:44:05.111276  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e"
	I0927 01:44:05.175272  760583 logs.go:123] Gathering logs for kube-controller-manager [85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1] ...
	I0927 01:44:05.175306  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1"
	I0927 01:44:05.254903  760583 logs.go:123] Gathering logs for kindnet [af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f] ...
	I0927 01:44:05.254980  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f"
	I0927 01:44:05.297002  760583 logs.go:123] Gathering logs for container status ...
	I0927 01:44:05.297034  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:05.344308  760583 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:05.344335  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 01:44:05.395357  760583 logs.go:138] Found kubelet problem: Sep 27 01:40:09 no-preload-874305 kubelet[744]: W0927 01:40:09.065999     744 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-874305" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-874305' and this object
	W0927 01:44:05.395640  760583 logs.go:138] Found kubelet problem: Sep 27 01:40:09 no-preload-874305 kubelet[744]: E0927 01:40:09.066042     744 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-874305\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-874305' and this object" logger="UnhandledError"
	I0927 01:44:05.437023  760583 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:05.437052  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:44:05.580668  760583 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:05.580697  760583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 01:44:05.580787  760583 out.go:270] X Problems detected in kubelet:
	W0927 01:44:05.580804  760583 out.go:270]   Sep 27 01:40:09 no-preload-874305 kubelet[744]: W0927 01:40:09.065999     744 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-874305" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-874305' and this object
	W0927 01:44:05.580811  760583 out.go:270]   Sep 27 01:40:09 no-preload-874305 kubelet[744]: E0927 01:40:09.066042     744 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-874305\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-874305' and this object" logger="UnhandledError"
	I0927 01:44:05.580829  760583 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:05.580835  760583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:44:07.064530  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:09.563994  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:12.063999  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:14.563247  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:15.581701  760583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:15.593283  760583 api_server.go:72] duration metric: took 4m18.268186516s to wait for apiserver process to appear ...
	I0927 01:44:15.593309  760583 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:44:15.593343  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:15.593403  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:15.632173  760583 cri.go:89] found id: "09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e"
	I0927 01:44:15.632192  760583 cri.go:89] found id: ""
	I0927 01:44:15.632199  760583 logs.go:276] 1 containers: [09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e]
	I0927 01:44:15.632257  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:15.635866  760583 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:15.635934  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:15.673475  760583 cri.go:89] found id: "ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7"
	I0927 01:44:15.673498  760583 cri.go:89] found id: ""
	I0927 01:44:15.673506  760583 logs.go:276] 1 containers: [ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7]
	I0927 01:44:15.673614  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:15.678405  760583 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:15.678478  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:15.717347  760583 cri.go:89] found id: "0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142"
	I0927 01:44:15.717371  760583 cri.go:89] found id: ""
	I0927 01:44:15.717379  760583 logs.go:276] 1 containers: [0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142]
	I0927 01:44:15.717438  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:15.720866  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:15.720934  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:15.760518  760583 cri.go:89] found id: "4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31"
	I0927 01:44:15.760538  760583 cri.go:89] found id: ""
	I0927 01:44:15.760546  760583 logs.go:276] 1 containers: [4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31]
	I0927 01:44:15.760601  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:15.763987  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:15.764054  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:15.805594  760583 cri.go:89] found id: "97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421"
	I0927 01:44:15.805615  760583 cri.go:89] found id: ""
	I0927 01:44:15.805623  760583 logs.go:276] 1 containers: [97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421]
	I0927 01:44:15.805681  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:15.809312  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:15.809380  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:15.846913  760583 cri.go:89] found id: "85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1"
	I0927 01:44:15.846937  760583 cri.go:89] found id: ""
	I0927 01:44:15.846946  760583 logs.go:276] 1 containers: [85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1]
	I0927 01:44:15.847001  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:15.850516  760583 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:15.850606  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:15.886460  760583 cri.go:89] found id: "af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f"
	I0927 01:44:15.886482  760583 cri.go:89] found id: ""
	I0927 01:44:15.886490  760583 logs.go:276] 1 containers: [af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f]
	I0927 01:44:15.886559  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:15.889921  760583 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:15.890005  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:15.928084  760583 cri.go:89] found id: "4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567"
	I0927 01:44:15.928105  760583 cri.go:89] found id: ""
	I0927 01:44:15.928113  760583 logs.go:276] 1 containers: [4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567]
	I0927 01:44:15.928171  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:15.931779  760583 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:44:15.931852  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:44:15.987694  760583 cri.go:89] found id: "81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f"
	I0927 01:44:15.987716  760583 cri.go:89] found id: "4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257"
	I0927 01:44:15.987721  760583 cri.go:89] found id: ""
	I0927 01:44:15.987728  760583 logs.go:276] 2 containers: [81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f 4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257]
	I0927 01:44:15.987783  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:15.991356  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:15.995022  760583 logs.go:123] Gathering logs for kube-apiserver [09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e] ...
	I0927 01:44:15.995048  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e"
	I0927 01:44:16.049104  760583 logs.go:123] Gathering logs for kube-controller-manager [85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1] ...
	I0927 01:44:16.049139  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1"
	I0927 01:44:16.135902  760583 logs.go:123] Gathering logs for kindnet [af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f] ...
	I0927 01:44:16.135934  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f"
	I0927 01:44:16.179360  760583 logs.go:123] Gathering logs for storage-provisioner [4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257] ...
	I0927 01:44:16.179390  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257"
	I0927 01:44:16.215286  760583 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:16.215311  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:16.292979  760583 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:16.293017  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:16.309532  760583 logs.go:123] Gathering logs for etcd [ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7] ...
	I0927 01:44:16.309570  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7"
	I0927 01:44:16.363047  760583 logs.go:123] Gathering logs for kube-scheduler [4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31] ...
	I0927 01:44:16.363080  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31"
	I0927 01:44:16.424144  760583 logs.go:123] Gathering logs for kube-proxy [97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421] ...
	I0927 01:44:16.424174  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421"
	I0927 01:44:16.467818  760583 logs.go:123] Gathering logs for coredns [0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142] ...
	I0927 01:44:16.467896  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142"
	I0927 01:44:16.511519  760583 logs.go:123] Gathering logs for kubernetes-dashboard [4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567] ...
	I0927 01:44:16.511547  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567"
	I0927 01:44:16.567161  760583 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:16.567187  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 01:44:16.613215  760583 logs.go:138] Found kubelet problem: Sep 27 01:40:09 no-preload-874305 kubelet[744]: W0927 01:40:09.065999     744 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-874305" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-874305' and this object
	W0927 01:44:16.613492  760583 logs.go:138] Found kubelet problem: Sep 27 01:40:09 no-preload-874305 kubelet[744]: E0927 01:40:09.066042     744 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-874305\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-874305' and this object" logger="UnhandledError"
	I0927 01:44:16.656889  760583 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:16.656927  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:44:16.779496  760583 logs.go:123] Gathering logs for storage-provisioner [81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f] ...
	I0927 01:44:16.779525  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f"
	I0927 01:44:16.823855  760583 logs.go:123] Gathering logs for container status ...
	I0927 01:44:16.823887  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:16.879353  760583 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:16.879378  760583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 01:44:16.879458  760583 out.go:270] X Problems detected in kubelet:
	W0927 01:44:16.879470  760583 out.go:270]   Sep 27 01:40:09 no-preload-874305 kubelet[744]: W0927 01:40:09.065999     744 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-874305" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-874305' and this object
	W0927 01:44:16.879477  760583 out.go:270]   Sep 27 01:40:09 no-preload-874305 kubelet[744]: E0927 01:40:09.066042     744 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-874305\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-874305' and this object" logger="UnhandledError"
	I0927 01:44:16.879597  760583 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:16.879606  760583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:44:16.589478  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:19.064551  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:21.563101  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:23.563795  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:25.564112  756367 pod_ready.go:103] pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace has status "Ready":"False"
	I0927 01:44:26.880500  760583 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0927 01:44:26.888960  760583 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0927 01:44:26.889921  760583 api_server.go:141] control plane version: v1.31.1
	I0927 01:44:26.889943  760583 api_server.go:131] duration metric: took 11.296627418s to wait for apiserver health ...
	I0927 01:44:26.889952  760583 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 01:44:26.889973  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:26.890037  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:26.928056  760583 cri.go:89] found id: "09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e"
	I0927 01:44:26.928075  760583 cri.go:89] found id: ""
	I0927 01:44:26.928082  760583 logs.go:276] 1 containers: [09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e]
	I0927 01:44:26.928139  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:26.932211  760583 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:26.932281  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:26.983808  760583 cri.go:89] found id: "ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7"
	I0927 01:44:26.983826  760583 cri.go:89] found id: ""
	I0927 01:44:26.983834  760583 logs.go:276] 1 containers: [ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7]
	I0927 01:44:26.983889  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:26.987698  760583 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:26.987775  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:27.026072  760583 cri.go:89] found id: "0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142"
	I0927 01:44:27.026096  760583 cri.go:89] found id: ""
	I0927 01:44:27.026105  760583 logs.go:276] 1 containers: [0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142]
	I0927 01:44:27.026175  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.030228  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:27.030310  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:27.084820  760583 cri.go:89] found id: "4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31"
	I0927 01:44:27.084843  760583 cri.go:89] found id: ""
	I0927 01:44:27.084852  760583 logs.go:276] 1 containers: [4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31]
	I0927 01:44:27.084912  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.089536  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:27.089625  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:27.150769  760583 cri.go:89] found id: "97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421"
	I0927 01:44:27.150794  760583 cri.go:89] found id: ""
	I0927 01:44:27.150802  760583 logs.go:276] 1 containers: [97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421]
	I0927 01:44:27.150859  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.155318  760583 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:27.155396  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:27.214939  760583 cri.go:89] found id: "85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1"
	I0927 01:44:27.214963  760583 cri.go:89] found id: ""
	I0927 01:44:27.214971  760583 logs.go:276] 1 containers: [85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1]
	I0927 01:44:27.215026  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.220857  760583 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:27.220933  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:27.278961  760583 cri.go:89] found id: "af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f"
	I0927 01:44:27.278988  760583 cri.go:89] found id: ""
	I0927 01:44:27.278996  760583 logs.go:276] 1 containers: [af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f]
	I0927 01:44:27.279050  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.283928  760583 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:27.284002  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:27.331343  760583 cri.go:89] found id: "4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567"
	I0927 01:44:27.331363  760583 cri.go:89] found id: ""
	I0927 01:44:27.331372  760583 logs.go:276] 1 containers: [4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567]
	I0927 01:44:27.331425  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.335118  760583 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:44:27.335228  760583 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:44:27.419376  760583 cri.go:89] found id: "81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f"
	I0927 01:44:27.419442  760583 cri.go:89] found id: "4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257"
	I0927 01:44:27.419462  760583 cri.go:89] found id: ""
	I0927 01:44:27.419488  760583 logs.go:276] 2 containers: [81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f 4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257]
	I0927 01:44:27.419572  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.423762  760583 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.428154  760583 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:27.428213  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:27.447555  760583 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:27.447630  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:44:27.604449  760583 logs.go:123] Gathering logs for kindnet [af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f] ...
	I0927 01:44:27.604480  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af121b47fb0451e00928c37c7ea5dbdc34876fc9c3f26bca2c8ccc33724b472f"
	I0927 01:44:27.662509  760583 logs.go:123] Gathering logs for storage-provisioner [81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f] ...
	I0927 01:44:27.662539  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81dfc52c2caedc6ccc568a3c88ce90a33e16b5fa1ad78c1ab5aacb0f6c1e296f"
	I0927 01:44:27.719957  760583 logs.go:123] Gathering logs for container status ...
	I0927 01:44:27.719981  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:27.800479  760583 logs.go:123] Gathering logs for coredns [0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142] ...
	I0927 01:44:27.800510  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a08936d04837493566357d0f9064f7ab2d4dd05c9200a34d3a391c374e5d142"
	I0927 01:44:27.863618  760583 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:27.863773  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 01:44:27.920634  760583 logs.go:138] Found kubelet problem: Sep 27 01:40:09 no-preload-874305 kubelet[744]: W0927 01:40:09.065999     744 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-874305" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-874305' and this object
	W0927 01:44:27.920893  760583 logs.go:138] Found kubelet problem: Sep 27 01:40:09 no-preload-874305 kubelet[744]: E0927 01:40:09.066042     744 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-874305\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-874305' and this object" logger="UnhandledError"
	I0927 01:44:27.987954  760583 logs.go:123] Gathering logs for kube-proxy [97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421] ...
	I0927 01:44:27.987995  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 97f3f29a6087ab159457c975afe52b5df679696e2540a7475dd07cdd566a6421"
	I0927 01:44:28.049357  760583 logs.go:123] Gathering logs for kubernetes-dashboard [4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567] ...
	I0927 01:44:28.049389  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed220057618f5e67044311ba1ef3a6e29470c322670689f1ccd2851aa044567"
	I0927 01:44:28.109886  760583 logs.go:123] Gathering logs for storage-provisioner [4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257] ...
	I0927 01:44:28.109916  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4fcb5b5f061ff6b3956e777a227057c550db431ed0232f4f4c4ea746b3f08257"
	I0927 01:44:28.167978  760583 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:28.168011  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:28.269689  760583 logs.go:123] Gathering logs for kube-apiserver [09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e] ...
	I0927 01:44:28.269725  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09dcb3356c6a19c4a374fc865a3369660b39e464338f69ef2b075183fc110f3e"
	I0927 01:44:28.362483  760583 logs.go:123] Gathering logs for etcd [ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7] ...
	I0927 01:44:28.362526  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba6bd8e0809785f219e889100450616432c1d9c8273b84573fff2275adc662c7"
	I0927 01:44:28.422457  760583 logs.go:123] Gathering logs for kube-scheduler [4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31] ...
	I0927 01:44:28.422497  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e2c243d3f202b9ba2c3ed3fc36004a1a354f525c1af9a3a18e0e3869db55b31"
	I0927 01:44:28.497166  760583 logs.go:123] Gathering logs for kube-controller-manager [85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1] ...
	I0927 01:44:28.497208  760583 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 85064458a613d60c53b3ae0b7ee87b46ab9bf4d9c8e7335526ec77fdd90da0a1"
	I0927 01:44:28.612607  760583 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:28.612645  760583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 01:44:28.612741  760583 out.go:270] X Problems detected in kubelet:
	W0927 01:44:28.612770  760583 out.go:270]   Sep 27 01:40:09 no-preload-874305 kubelet[744]: W0927 01:40:09.065999     744 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-874305" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-874305' and this object
	W0927 01:44:28.612907  760583 out.go:270]   Sep 27 01:40:09 no-preload-874305 kubelet[744]: E0927 01:40:09.066042     744 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-874305\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-874305' and this object" logger="UnhandledError"
	I0927 01:44:28.612922  760583 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:28.612931  760583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:44:27.057365  756367 pod_ready.go:82] duration metric: took 4m0.000050103s for pod "metrics-server-9975d5f86-5bphl" in "kube-system" namespace to be "Ready" ...
	E0927 01:44:27.057440  756367 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0927 01:44:27.057470  756367 pod_ready.go:39] duration metric: took 5m26.719711701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 01:44:27.057518  756367 api_server.go:52] waiting for apiserver process to appear ...
	I0927 01:44:27.057586  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:27.057674  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:27.114452  756367 cri.go:89] found id: "728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4"
	I0927 01:44:27.114473  756367 cri.go:89] found id: ""
	I0927 01:44:27.114482  756367 logs.go:276] 1 containers: [728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4]
	I0927 01:44:27.114537  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.120741  756367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:27.120814  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:27.166103  756367 cri.go:89] found id: "fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81"
	I0927 01:44:27.166124  756367 cri.go:89] found id: ""
	I0927 01:44:27.166132  756367 logs.go:276] 1 containers: [fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81]
	I0927 01:44:27.166190  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.172322  756367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:27.172395  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:27.223120  756367 cri.go:89] found id: "6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a"
	I0927 01:44:27.223140  756367 cri.go:89] found id: ""
	I0927 01:44:27.223148  756367 logs.go:276] 1 containers: [6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a]
	I0927 01:44:27.223201  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.227267  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:27.227386  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:27.286086  756367 cri.go:89] found id: "1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0"
	I0927 01:44:27.286147  756367 cri.go:89] found id: ""
	I0927 01:44:27.286169  756367 logs.go:276] 1 containers: [1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0]
	I0927 01:44:27.286259  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.290363  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:27.290479  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:27.347335  756367 cri.go:89] found id: "1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e"
	I0927 01:44:27.347409  756367 cri.go:89] found id: ""
	I0927 01:44:27.347440  756367 logs.go:276] 1 containers: [1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e]
	I0927 01:44:27.347530  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.380567  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:27.380651  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:27.427366  756367 cri.go:89] found id: "fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa"
	I0927 01:44:27.427387  756367 cri.go:89] found id: ""
	I0927 01:44:27.427395  756367 logs.go:276] 1 containers: [fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa]
	I0927 01:44:27.427452  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.433497  756367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:27.433617  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:27.485055  756367 cri.go:89] found id: "f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48"
	I0927 01:44:27.485130  756367 cri.go:89] found id: ""
	I0927 01:44:27.485154  756367 logs.go:276] 1 containers: [f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48]
	I0927 01:44:27.485247  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.489063  756367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:27.489133  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:27.547438  756367 cri.go:89] found id: "7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c"
	I0927 01:44:27.547463  756367 cri.go:89] found id: ""
	I0927 01:44:27.547471  756367 logs.go:276] 1 containers: [7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c]
	I0927 01:44:27.547523  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.551928  756367 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:44:27.551996  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:44:27.598982  756367 cri.go:89] found id: "138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832"
	I0927 01:44:27.599002  756367 cri.go:89] found id: ""
	I0927 01:44:27.599009  756367 logs.go:276] 1 containers: [138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832]
	I0927 01:44:27.599063  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:27.609032  756367 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:27.609060  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 01:44:27.686472  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.276608     736 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.686776  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277156     736 reflector.go:138] object-"kube-system"/"kube-proxy-token-mdl25": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-mdl25" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.687011  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277414     736 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.687250  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277602     736 reflector.go:138] object-"kube-system"/"kindnet-token-jwlc6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jwlc6" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.687495  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277796     736 reflector.go:138] object-"kube-system"/"coredns-token-k4cmv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-k4cmv" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.687749  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277995     736 reflector.go:138] object-"kube-system"/"storage-provisioner-token-tgp2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-tgp2f" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.687997  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.278201     736 reflector.go:138] object-"kube-system"/"metrics-server-token-9xfpw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9xfpw" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.688232  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.278383     736 reflector.go:138] object-"default"/"default-token-lm75v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lm75v" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.697891  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:03 old-k8s-version-745133 kubelet[736]: E0927 01:39:03.038144     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:27.698105  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:03 old-k8s-version-745133 kubelet[736]: E0927 01:39:03.733711     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.700305  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:17 old-k8s-version-745133 kubelet[736]: E0927 01:39:17.695628     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:27.700757  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:19 old-k8s-version-745133 kubelet[736]: E0927 01:39:19.799354     736 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-hcwf2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-hcwf2" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:27.703093  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:28 old-k8s-version-745133 kubelet[736]: E0927 01:39:28.679675     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.703606  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:31 old-k8s-version-745133 kubelet[736]: E0927 01:39:31.971044     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.704100  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:32 old-k8s-version-745133 kubelet[736]: E0927 01:39:32.973106     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.704469  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:39 old-k8s-version-745133 kubelet[736]: E0927 01:39:39.607023     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.706606  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:43 old-k8s-version-745133 kubelet[736]: E0927 01:39:43.686128     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:27.707301  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:55 old-k8s-version-745133 kubelet[736]: E0927 01:39:55.013155     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.707523  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:58 old-k8s-version-745133 kubelet[736]: E0927 01:39:58.675275     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.707878  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:59 old-k8s-version-745133 kubelet[736]: E0927 01:39:59.607420     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.708088  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:09 old-k8s-version-745133 kubelet[736]: E0927 01:40:09.675710     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.708444  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:11 old-k8s-version-745133 kubelet[736]: E0927 01:40:11.674655     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.709077  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:24 old-k8s-version-745133 kubelet[736]: E0927 01:40:24.053848     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.711317  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:24 old-k8s-version-745133 kubelet[736]: E0927 01:40:24.685441     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:27.711682  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:29 old-k8s-version-745133 kubelet[736]: E0927 01:40:29.607048     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.711894  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:38 old-k8s-version-745133 kubelet[736]: E0927 01:40:38.675142     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.712250  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:43 old-k8s-version-745133 kubelet[736]: E0927 01:40:43.674664     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.712463  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:50 old-k8s-version-745133 kubelet[736]: E0927 01:40:50.675165     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.712820  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:57 old-k8s-version-745133 kubelet[736]: E0927 01:40:57.675462     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.713034  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:04 old-k8s-version-745133 kubelet[736]: E0927 01:41:04.679852     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.713655  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:11 old-k8s-version-745133 kubelet[736]: E0927 01:41:11.121508     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.713899  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:17 old-k8s-version-745133 kubelet[736]: E0927 01:41:17.676076     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.714253  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:19 old-k8s-version-745133 kubelet[736]: E0927 01:41:19.607004     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.714545  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:28 old-k8s-version-745133 kubelet[736]: E0927 01:41:28.675474     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.714933  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:33 old-k8s-version-745133 kubelet[736]: E0927 01:41:33.674670     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.715143  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:41 old-k8s-version-745133 kubelet[736]: E0927 01:41:41.675136     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.715561  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:46 old-k8s-version-745133 kubelet[736]: E0927 01:41:46.674672     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.721157  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:55 old-k8s-version-745133 kubelet[736]: E0927 01:41:55.687901     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:27.721536  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:01 old-k8s-version-745133 kubelet[736]: E0927 01:42:01.675876     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.721796  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:10 old-k8s-version-745133 kubelet[736]: E0927 01:42:10.675343     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.722157  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:13 old-k8s-version-745133 kubelet[736]: E0927 01:42:13.675175     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.722367  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:22 old-k8s-version-745133 kubelet[736]: E0927 01:42:22.675195     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.722740  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:26 old-k8s-version-745133 kubelet[736]: E0927 01:42:26.674767     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.722949  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:33 old-k8s-version-745133 kubelet[736]: E0927 01:42:33.675487     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.723571  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:42 old-k8s-version-745133 kubelet[736]: E0927 01:42:42.264610     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.723782  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:47 old-k8s-version-745133 kubelet[736]: E0927 01:42:47.675472     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.724146  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:49 old-k8s-version-745133 kubelet[736]: E0927 01:42:49.615148     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.724507  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:01 old-k8s-version-745133 kubelet[736]: E0927 01:43:01.674781     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.724717  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:01 old-k8s-version-745133 kubelet[736]: E0927 01:43:01.675864     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.724926  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:13 old-k8s-version-745133 kubelet[736]: E0927 01:43:13.675722     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.725397  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:15 old-k8s-version-745133 kubelet[736]: E0927 01:43:15.674635     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.725777  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:26 old-k8s-version-745133 kubelet[736]: E0927 01:43:26.674663     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.725988  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:28 old-k8s-version-745133 kubelet[736]: E0927 01:43:28.675242     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.726198  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:39 old-k8s-version-745133 kubelet[736]: E0927 01:43:39.675681     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.726553  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:40 old-k8s-version-745133 kubelet[736]: E0927 01:43:40.674684     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.727033  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:50 old-k8s-version-745133 kubelet[736]: E0927 01:43:50.675166     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.727415  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:54 old-k8s-version-745133 kubelet[736]: E0927 01:43:54.675191     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.727656  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:02 old-k8s-version-745133 kubelet[736]: E0927 01:44:02.676623     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.728017  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:27.728228  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:27.728616  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	I0927 01:44:27.728630  756367 logs.go:123] Gathering logs for kube-controller-manager [fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa] ...
	I0927 01:44:27.728645  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa"
	I0927 01:44:27.845793  756367 logs.go:123] Gathering logs for kindnet [f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48] ...
	I0927 01:44:27.845828  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48"
	I0927 01:44:27.929600  756367 logs.go:123] Gathering logs for kubernetes-dashboard [7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c] ...
	I0927 01:44:27.929628  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c"
	I0927 01:44:28.012427  756367 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:28.012457  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:28.034915  756367 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:28.034993  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:28.149972  756367 logs.go:123] Gathering logs for kube-apiserver [728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4] ...
	I0927 01:44:28.150052  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4"
	I0927 01:44:28.283007  756367 logs.go:123] Gathering logs for kube-scheduler [1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0] ...
	I0927 01:44:28.283106  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0"
	I0927 01:44:28.353690  756367 logs.go:123] Gathering logs for kube-proxy [1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e] ...
	I0927 01:44:28.353785  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e"
	I0927 01:44:28.428293  756367 logs.go:123] Gathering logs for container status ...
	I0927 01:44:28.428328  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:28.500951  756367 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:28.501042  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:44:28.691339  756367 logs.go:123] Gathering logs for etcd [fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81] ...
	I0927 01:44:28.691372  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81"
	I0927 01:44:28.752406  756367 logs.go:123] Gathering logs for coredns [6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a] ...
	I0927 01:44:28.752437  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a"
	I0927 01:44:28.791959  756367 logs.go:123] Gathering logs for storage-provisioner [138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832] ...
	I0927 01:44:28.791991  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832"
	I0927 01:44:28.833586  756367 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:28.833613  756367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 01:44:28.833684  756367 out.go:270] X Problems detected in kubelet:
	W0927 01:44:28.833702  756367 out.go:270]   Sep 27 01:43:54 old-k8s-version-745133 kubelet[736]: E0927 01:43:54.675191     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:28.833710  756367 out.go:270]   Sep 27 01:44:02 old-k8s-version-745133 kubelet[736]: E0927 01:44:02.676623     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:28.833752  756367 out.go:270]   Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:28.833765  756367 out.go:270]   Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:28.833773  756367 out.go:270]   Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	I0927 01:44:28.833795  756367 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:28.833815  756367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:44:38.620864  760583 system_pods.go:59] 9 kube-system pods found
	I0927 01:44:38.620900  760583 system_pods.go:61] "coredns-7c65d6cfc9-5wdrw" [cbb570f9-ad5b-4234-8a8f-87c1979edbde] Running
	I0927 01:44:38.620908  760583 system_pods.go:61] "etcd-no-preload-874305" [87ac9caf-0333-4a2b-b20a-3001373bad27] Running
	I0927 01:44:38.620913  760583 system_pods.go:61] "kindnet-pchqt" [d35c4808-e965-450b-bf50-6232196b792f] Running
	I0927 01:44:38.620917  760583 system_pods.go:61] "kube-apiserver-no-preload-874305" [998d3df2-a118-4fbf-8ffb-baf15badc4da] Running
	I0927 01:44:38.620921  760583 system_pods.go:61] "kube-controller-manager-no-preload-874305" [a77b21bd-8e77-48cd-a9f7-b9aca27f30e5] Running
	I0927 01:44:38.620925  760583 system_pods.go:61] "kube-proxy-mghm9" [69e7e456-1607-4fe3-800f-c1089861982c] Running
	I0927 01:44:38.620930  760583 system_pods.go:61] "kube-scheduler-no-preload-874305" [81b0b1e9-c100-443e-97ac-ec2b8a9b57ca] Running
	I0927 01:44:38.620938  760583 system_pods.go:61] "metrics-server-6867b74b74-t5lkb" [24e2a140-bc6a-489b-958b-bb2fc8372734] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:44:38.620943  760583 system_pods.go:61] "storage-provisioner" [360de921-071c-413e-a12f-a02d6a3ba426] Running
	I0927 01:44:38.620950  760583 system_pods.go:74] duration metric: took 11.730992143s to wait for pod list to return data ...
	I0927 01:44:38.620966  760583 default_sa.go:34] waiting for default service account to be created ...
	I0927 01:44:38.623836  760583 default_sa.go:45] found service account: "default"
	I0927 01:44:38.623866  760583 default_sa.go:55] duration metric: took 2.893613ms for default service account to be created ...
	I0927 01:44:38.623877  760583 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 01:44:38.629496  760583 system_pods.go:86] 9 kube-system pods found
	I0927 01:44:38.629530  760583 system_pods.go:89] "coredns-7c65d6cfc9-5wdrw" [cbb570f9-ad5b-4234-8a8f-87c1979edbde] Running
	I0927 01:44:38.629538  760583 system_pods.go:89] "etcd-no-preload-874305" [87ac9caf-0333-4a2b-b20a-3001373bad27] Running
	I0927 01:44:38.629542  760583 system_pods.go:89] "kindnet-pchqt" [d35c4808-e965-450b-bf50-6232196b792f] Running
	I0927 01:44:38.629548  760583 system_pods.go:89] "kube-apiserver-no-preload-874305" [998d3df2-a118-4fbf-8ffb-baf15badc4da] Running
	I0927 01:44:38.629554  760583 system_pods.go:89] "kube-controller-manager-no-preload-874305" [a77b21bd-8e77-48cd-a9f7-b9aca27f30e5] Running
	I0927 01:44:38.629559  760583 system_pods.go:89] "kube-proxy-mghm9" [69e7e456-1607-4fe3-800f-c1089861982c] Running
	I0927 01:44:38.629565  760583 system_pods.go:89] "kube-scheduler-no-preload-874305" [81b0b1e9-c100-443e-97ac-ec2b8a9b57ca] Running
	I0927 01:44:38.629573  760583 system_pods.go:89] "metrics-server-6867b74b74-t5lkb" [24e2a140-bc6a-489b-958b-bb2fc8372734] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 01:44:38.629578  760583 system_pods.go:89] "storage-provisioner" [360de921-071c-413e-a12f-a02d6a3ba426] Running
	I0927 01:44:38.629591  760583 system_pods.go:126] duration metric: took 5.709705ms to wait for k8s-apps to be running ...
	I0927 01:44:38.629602  760583 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 01:44:38.629662  760583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:44:38.643658  760583 system_svc.go:56] duration metric: took 14.038795ms WaitForService to wait for kubelet
	I0927 01:44:38.643685  760583 kubeadm.go:582] duration metric: took 4m41.318593294s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 01:44:38.643707  760583 node_conditions.go:102] verifying NodePressure condition ...
	I0927 01:44:38.647341  760583 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0927 01:44:38.647376  760583 node_conditions.go:123] node cpu capacity is 2
	I0927 01:44:38.647389  760583 node_conditions.go:105] duration metric: took 3.67588ms to run NodePressure ...
	I0927 01:44:38.647402  760583 start.go:241] waiting for startup goroutines ...
	I0927 01:44:38.647419  760583 start.go:246] waiting for cluster config update ...
	I0927 01:44:38.647434  760583 start.go:255] writing updated cluster config ...
	I0927 01:44:38.647732  760583 ssh_runner.go:195] Run: rm -f paused
	I0927 01:44:38.714519  760583 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 01:44:38.719037  760583 out.go:177] * Done! kubectl is now configured to use "no-preload-874305" cluster and "default" namespace by default
	I0927 01:44:38.835711  756367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:44:38.852118  756367 api_server.go:72] duration metric: took 5m59.130981288s to wait for apiserver process to appear ...
	I0927 01:44:38.852145  756367 api_server.go:88] waiting for apiserver healthz status ...
	I0927 01:44:38.852204  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 01:44:38.852315  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 01:44:38.922459  756367 cri.go:89] found id: "728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4"
	I0927 01:44:38.922483  756367 cri.go:89] found id: ""
	I0927 01:44:38.922490  756367 logs.go:276] 1 containers: [728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4]
	I0927 01:44:38.922545  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:38.927062  756367 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 01:44:38.927133  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 01:44:38.981224  756367 cri.go:89] found id: "fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81"
	I0927 01:44:38.981250  756367 cri.go:89] found id: ""
	I0927 01:44:38.981259  756367 logs.go:276] 1 containers: [fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81]
	I0927 01:44:38.981318  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:38.986555  756367 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 01:44:38.986636  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 01:44:39.039294  756367 cri.go:89] found id: "6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a"
	I0927 01:44:39.039320  756367 cri.go:89] found id: ""
	I0927 01:44:39.039328  756367 logs.go:276] 1 containers: [6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a]
	I0927 01:44:39.039386  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.043454  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 01:44:39.043529  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 01:44:39.104793  756367 cri.go:89] found id: "1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0"
	I0927 01:44:39.104822  756367 cri.go:89] found id: ""
	I0927 01:44:39.104831  756367 logs.go:276] 1 containers: [1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0]
	I0927 01:44:39.104884  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.110103  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 01:44:39.110168  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 01:44:39.174469  756367 cri.go:89] found id: "1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e"
	I0927 01:44:39.174497  756367 cri.go:89] found id: ""
	I0927 01:44:39.174505  756367 logs.go:276] 1 containers: [1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e]
	I0927 01:44:39.174562  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.178544  756367 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 01:44:39.178620  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 01:44:39.229152  756367 cri.go:89] found id: "fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa"
	I0927 01:44:39.229176  756367 cri.go:89] found id: ""
	I0927 01:44:39.229184  756367 logs.go:276] 1 containers: [fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa]
	I0927 01:44:39.229244  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.232807  756367 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 01:44:39.232877  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 01:44:39.273216  756367 cri.go:89] found id: "f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48"
	I0927 01:44:39.273239  756367 cri.go:89] found id: ""
	I0927 01:44:39.273247  756367 logs.go:276] 1 containers: [f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48]
	I0927 01:44:39.273305  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.277150  756367 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0927 01:44:39.277226  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0927 01:44:39.315705  756367 cri.go:89] found id: "138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832"
	I0927 01:44:39.315727  756367 cri.go:89] found id: ""
	I0927 01:44:39.315734  756367 logs.go:276] 1 containers: [138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832]
	I0927 01:44:39.315791  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.319422  756367 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0927 01:44:39.319493  756367 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0927 01:44:39.374266  756367 cri.go:89] found id: "7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c"
	I0927 01:44:39.374289  756367 cri.go:89] found id: ""
	I0927 01:44:39.374297  756367 logs.go:276] 1 containers: [7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c]
	I0927 01:44:39.374359  756367 ssh_runner.go:195] Run: which crictl
	I0927 01:44:39.378605  756367 logs.go:123] Gathering logs for kindnet [f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48] ...
	I0927 01:44:39.378629  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48"
	I0927 01:44:39.431823  756367 logs.go:123] Gathering logs for kubernetes-dashboard [7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c] ...
	I0927 01:44:39.431856  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c"
	I0927 01:44:39.476375  756367 logs.go:123] Gathering logs for container status ...
	I0927 01:44:39.476405  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0927 01:44:39.531425  756367 logs.go:123] Gathering logs for dmesg ...
	I0927 01:44:39.531456  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 01:44:39.548458  756367 logs.go:123] Gathering logs for kube-proxy [1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e] ...
	I0927 01:44:39.548489  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e"
	I0927 01:44:39.587845  756367 logs.go:123] Gathering logs for kube-apiserver [728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4] ...
	I0927 01:44:39.587877  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4"
	I0927 01:44:39.665236  756367 logs.go:123] Gathering logs for kube-scheduler [1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0] ...
	I0927 01:44:39.665275  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0"
	I0927 01:44:39.711276  756367 logs.go:123] Gathering logs for storage-provisioner [138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832] ...
	I0927 01:44:39.711308  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832"
	I0927 01:44:39.751909  756367 logs.go:123] Gathering logs for CRI-O ...
	I0927 01:44:39.751939  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 01:44:39.832973  756367 logs.go:123] Gathering logs for etcd [fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81] ...
	I0927 01:44:39.833012  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81"
	I0927 01:44:39.887308  756367 logs.go:123] Gathering logs for coredns [6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a] ...
	I0927 01:44:39.887339  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a"
	I0927 01:44:39.928050  756367 logs.go:123] Gathering logs for kube-controller-manager [fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa] ...
	I0927 01:44:39.928079  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa"
	I0927 01:44:40.029021  756367 logs.go:123] Gathering logs for kubelet ...
	I0927 01:44:40.029073  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0927 01:44:40.096759  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.276608     736 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.097055  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277156     736 reflector.go:138] object-"kube-system"/"kube-proxy-token-mdl25": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-mdl25" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.097340  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277414     736 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.097591  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277602     736 reflector.go:138] object-"kube-system"/"kindnet-token-jwlc6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jwlc6" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.097838  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277796     736 reflector.go:138] object-"kube-system"/"coredns-token-k4cmv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-k4cmv" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.098101  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.277995     736 reflector.go:138] object-"kube-system"/"storage-provisioner-token-tgp2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-tgp2f" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.098367  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.278201     736 reflector.go:138] object-"kube-system"/"metrics-server-token-9xfpw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9xfpw" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.098599  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:00 old-k8s-version-745133 kubelet[736]: E0927 01:39:00.278383     736 reflector.go:138] object-"default"/"default-token-lm75v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lm75v" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.108947  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:03 old-k8s-version-745133 kubelet[736]: E0927 01:39:03.038144     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:40.109179  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:03 old-k8s-version-745133 kubelet[736]: E0927 01:39:03.733711     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.111478  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:17 old-k8s-version-745133 kubelet[736]: E0927 01:39:17.695628     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:40.111949  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:19 old-k8s-version-745133 kubelet[736]: E0927 01:39:19.799354     736 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-hcwf2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-hcwf2" is forbidden: User "system:node:old-k8s-version-745133" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-745133' and this object
	W0927 01:44:40.113700  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:28 old-k8s-version-745133 kubelet[736]: E0927 01:39:28.679675     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.114248  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:31 old-k8s-version-745133 kubelet[736]: E0927 01:39:31.971044     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.114769  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:32 old-k8s-version-745133 kubelet[736]: E0927 01:39:32.973106     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.115110  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:39 old-k8s-version-745133 kubelet[736]: E0927 01:39:39.607023     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.117338  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:43 old-k8s-version-745133 kubelet[736]: E0927 01:39:43.686128     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:40.117990  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:55 old-k8s-version-745133 kubelet[736]: E0927 01:39:55.013155     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.118191  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:58 old-k8s-version-745133 kubelet[736]: E0927 01:39:58.675275     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.118552  756367 logs.go:138] Found kubelet problem: Sep 27 01:39:59 old-k8s-version-745133 kubelet[736]: E0927 01:39:59.607420     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.118760  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:09 old-k8s-version-745133 kubelet[736]: E0927 01:40:09.675710     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.119110  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:11 old-k8s-version-745133 kubelet[736]: E0927 01:40:11.674655     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.119744  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:24 old-k8s-version-745133 kubelet[736]: E0927 01:40:24.053848     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.121980  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:24 old-k8s-version-745133 kubelet[736]: E0927 01:40:24.685441     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:40.122341  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:29 old-k8s-version-745133 kubelet[736]: E0927 01:40:29.607048     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.122545  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:38 old-k8s-version-745133 kubelet[736]: E0927 01:40:38.675142     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.122921  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:43 old-k8s-version-745133 kubelet[736]: E0927 01:40:43.674664     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.123130  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:50 old-k8s-version-745133 kubelet[736]: E0927 01:40:50.675165     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.123528  756367 logs.go:138] Found kubelet problem: Sep 27 01:40:57 old-k8s-version-745133 kubelet[736]: E0927 01:40:57.675462     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.123736  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:04 old-k8s-version-745133 kubelet[736]: E0927 01:41:04.679852     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.124391  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:11 old-k8s-version-745133 kubelet[736]: E0927 01:41:11.121508     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.124607  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:17 old-k8s-version-745133 kubelet[736]: E0927 01:41:17.676076     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.124961  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:19 old-k8s-version-745133 kubelet[736]: E0927 01:41:19.607004     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.125167  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:28 old-k8s-version-745133 kubelet[736]: E0927 01:41:28.675474     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.125540  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:33 old-k8s-version-745133 kubelet[736]: E0927 01:41:33.674670     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.125751  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:41 old-k8s-version-745133 kubelet[736]: E0927 01:41:41.675136     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.126107  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:46 old-k8s-version-745133 kubelet[736]: E0927 01:41:46.674672     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.128445  756367 logs.go:138] Found kubelet problem: Sep 27 01:41:55 old-k8s-version-745133 kubelet[736]: E0927 01:41:55.687901     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	W0927 01:44:40.128841  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:01 old-k8s-version-745133 kubelet[736]: E0927 01:42:01.675876     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.129096  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:10 old-k8s-version-745133 kubelet[736]: E0927 01:42:10.675343     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.129444  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:13 old-k8s-version-745133 kubelet[736]: E0927 01:42:13.675175     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.129653  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:22 old-k8s-version-745133 kubelet[736]: E0927 01:42:22.675195     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.130012  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:26 old-k8s-version-745133 kubelet[736]: E0927 01:42:26.674767     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.130224  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:33 old-k8s-version-745133 kubelet[736]: E0927 01:42:33.675487     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.130896  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:42 old-k8s-version-745133 kubelet[736]: E0927 01:42:42.264610     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.131105  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:47 old-k8s-version-745133 kubelet[736]: E0927 01:42:47.675472     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.131468  756367 logs.go:138] Found kubelet problem: Sep 27 01:42:49 old-k8s-version-745133 kubelet[736]: E0927 01:42:49.615148     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.131829  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:01 old-k8s-version-745133 kubelet[736]: E0927 01:43:01.674781     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.132045  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:01 old-k8s-version-745133 kubelet[736]: E0927 01:43:01.675864     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.132240  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:13 old-k8s-version-745133 kubelet[736]: E0927 01:43:13.675722     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.132601  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:15 old-k8s-version-745133 kubelet[736]: E0927 01:43:15.674635     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.132997  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:26 old-k8s-version-745133 kubelet[736]: E0927 01:43:26.674663     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.133196  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:28 old-k8s-version-745133 kubelet[736]: E0927 01:43:28.675242     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.133410  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:39 old-k8s-version-745133 kubelet[736]: E0927 01:43:39.675681     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.133777  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:40 old-k8s-version-745133 kubelet[736]: E0927 01:43:40.674684     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.134273  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:50 old-k8s-version-745133 kubelet[736]: E0927 01:43:50.675166     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.134641  756367 logs.go:138] Found kubelet problem: Sep 27 01:43:54 old-k8s-version-745133 kubelet[736]: E0927 01:43:54.675191     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.134858  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:02 old-k8s-version-745133 kubelet[736]: E0927 01:44:02.676623     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.135214  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.135431  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.135825  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.136054  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:30 old-k8s-version-745133 kubelet[736]: E0927 01:44:30.675344     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.136427  756367 logs.go:138] Found kubelet problem: Sep 27 01:44:34 old-k8s-version-745133 kubelet[736]: E0927 01:44:34.674650     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	I0927 01:44:40.136460  756367 logs.go:123] Gathering logs for describe nodes ...
	I0927 01:44:40.136492  756367 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0927 01:44:40.285959  756367 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:40.285985  756367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0927 01:44:40.286068  756367 out.go:270] X Problems detected in kubelet:
	W0927 01:44:40.286083  756367 out.go:270]   Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.286120  756367 out.go:270]   Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.286137  756367 out.go:270]   Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	W0927 01:44:40.286143  756367 out.go:270]   Sep 27 01:44:30 old-k8s-version-745133 kubelet[736]: E0927 01:44:30.675344     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0927 01:44:40.286152  756367 out.go:270]   Sep 27 01:44:34 old-k8s-version-745133 kubelet[736]: E0927 01:44:34.674650     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	I0927 01:44:40.286163  756367 out.go:358] Setting ErrFile to fd 2...
	I0927 01:44:40.286170  756367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:44:50.288102  756367 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0927 01:44:50.299610  756367 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0927 01:44:50.302485  756367 out.go:201] 
	W0927 01:44:50.305011  756367 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0927 01:44:50.305047  756367 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0927 01:44:50.305076  756367 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0927 01:44:50.305085  756367 out.go:270] * 
	W0927 01:44:50.305893  756367 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 01:44:50.309681  756367 out.go:201] 
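	For reference, a minimal shell sketch of the recovery path suggested in the output above; it is illustrative only, the profile name old-k8s-version-745133 is taken from this run, and the --kubernetes-version flag is an assumption matching the v1.20.0 control plane this test targets:
	
	  # Capture full logs first so they can be attached to a GitHub issue (command taken from the box above).
	  minikube logs --file=logs.txt
	  # Wipe all profiles and cached state, as the log suggests for a control plane that never updated.
	  minikube delete --all --purge
	  # Re-create the profile; the version flag here is an assumption, not part of the logged suggestion.
	  minikube start -p old-k8s-version-745133 --kubernetes-version=v1.20.0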
	
	
	==> CRI-O <==
	Sep 27 01:42:42 old-k8s-version-745133 crio[625]: time="2024-09-27 01:42:42.291101335Z" level=info msg="Removed container 8b165d0197ab29202768f3aacc2c24d2d5ee336e652938d570e0b2ef3b035b54: kubernetes-dashboard/dashboard-metrics-scraper-8d5bb5db8-4qmkk/dashboard-metrics-scraper" id=d22a7734-c6b8-4c50-83f3-5f705b6d8f4c name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Sep 27 01:42:47 old-k8s-version-745133 crio[625]: time="2024-09-27 01:42:47.674962811Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=c4a73f3b-fc30-4d05-9c19-c263cbcfd92c name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:42:47 old-k8s-version-745133 crio[625]: time="2024-09-27 01:42:47.675182662Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=c4a73f3b-fc30-4d05-9c19-c263cbcfd92c name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:01 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:01.675324919Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=d814a018-8fe7-4551-9e3d-eedb76c75e91 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:01 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:01.675546601Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=d814a018-8fe7-4551-9e3d-eedb76c75e91 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:13 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:13.674970422Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=08cf5ad7-5c5d-478f-add5-70f996b5a43f name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:13 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:13.675192628Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=08cf5ad7-5c5d-478f-add5-70f996b5a43f name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:28 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:28.674697995Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=f66bdc77-7f02-4edf-b7f9-08ad1efdfa58 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:28 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:28.675024059Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=f66bdc77-7f02-4edf-b7f9-08ad1efdfa58 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:39 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:39.675271511Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=5ff32f30-30c8-4735-9e3e-3f40472ab285 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:39 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:39.675504573Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=5ff32f30-30c8-4735-9e3e-3f40472ab285 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:47 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:47.603515023Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=788a5ab6-2ca0-429d-b2db-9b261a07ba1d name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:47 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:47.603750152Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=788a5ab6-2ca0-429d-b2db-9b261a07ba1d name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:50 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:50.674696258Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=29df4f6d-90f1-4bfd-8a57-203f623f216a name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:43:50 old-k8s-version-745133 crio[625]: time="2024-09-27 01:43:50.674952957Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=29df4f6d-90f1-4bfd-8a57-203f623f216a name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:44:02 old-k8s-version-745133 crio[625]: time="2024-09-27 01:44:02.675993589Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=331d73fd-61cf-4c24-b4f4-162867c4d4b3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:44:02 old-k8s-version-745133 crio[625]: time="2024-09-27 01:44:02.676224049Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=331d73fd-61cf-4c24-b4f4-162867c4d4b3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:44:17 old-k8s-version-745133 crio[625]: time="2024-09-27 01:44:17.675015047Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=078141c4-49fb-4ade-b6e4-98d653b4dc91 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:44:17 old-k8s-version-745133 crio[625]: time="2024-09-27 01:44:17.675241200Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=078141c4-49fb-4ade-b6e4-98d653b4dc91 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:44:30 old-k8s-version-745133 crio[625]: time="2024-09-27 01:44:30.674670072Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=54910b99-0d61-444a-b067-1694609a6b63 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:44:30 old-k8s-version-745133 crio[625]: time="2024-09-27 01:44:30.674928372Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=54910b99-0d61-444a-b067-1694609a6b63 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:44:41 old-k8s-version-745133 crio[625]: time="2024-09-27 01:44:41.674747977Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=56a6e5ce-a564-45f7-ac3e-729e6c451922 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:44:41 old-k8s-version-745133 crio[625]: time="2024-09-27 01:44:41.674992033Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=56a6e5ce-a564-45f7-ac3e-729e6c451922 name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 27 01:44:41 old-k8s-version-745133 crio[625]: time="2024-09-27 01:44:41.675750719Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=1c6aef03-11c6-4d0a-a0df-d8c89a8843bf name=/runtime.v1alpha2.ImageService/PullImage
	Sep 27 01:44:41 old-k8s-version-745133 crio[625]: time="2024-09-27 01:44:41.679941139Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	054448306ce8e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           2 minutes ago       Exited              dashboard-metrics-scraper   5                   06a3ff2d4d9cb       dashboard-metrics-scraper-8d5bb5db8-4qmkk
	7a82c31e29814       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago       Running             kubernetes-dashboard        0                   bf391190b01eb       kubernetes-dashboard-cd95d586-msqw2
	f74f5a74223b6       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                           5 minutes ago       Running             kindnet-cni                 0                   29d20811ca7d5       kindnet-84442
	138c41837cf0b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           5 minutes ago       Running             storage-provisioner         0                   1b2a43663ab48       storage-provisioner
	1f91c6f77f281       25a5233254979d0678a2db1d15b76b73dc380d81bc5eed93916ba5638b3cd894                                           5 minutes ago       Running             kube-proxy                  0                   519dbedf6aa38       kube-proxy-tvwdl
	6a28aab707000       db91994f4ee8f894a1e8a6c1a76f615da8fc3c019300a3686291ce6fcbc57895                                           5 minutes ago       Running             coredns                     0                   0b6220f8343e0       coredns-74ff55c5b-drjjb
	13e4e39e2cd19       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           5 minutes ago       Running             busybox                     0                   c98a6e89a03ee       busybox
	728631fe1253b       2c08bbbc02d3aa5dfbf4e79f15c0a61424049288917aa10364464ca1f7de7157                                           6 minutes ago       Running             kube-apiserver              0                   bc24ca0fe0505       kube-apiserver-old-k8s-version-745133
	fd21817034d06       1df8a2b116bd16f7070fd383a6769c8d644b365575e8ffa3e492b84e4f05fc74                                           6 minutes ago       Running             kube-controller-manager     0                   62b86f5237685       kube-controller-manager-old-k8s-version-745133
	fbfae056e26dd       05b738aa1bc6355db8a2ee8639f3631b908286e43f584a3d2ee0c472de033c28                                           6 minutes ago       Running             etcd                        0                   3a761aaccec43       etcd-old-k8s-version-745133
	1399d95796260       e7605f88f17d6a4c3f083ef9c6f5f19b39f87e4d4406a05a8612b54a6ea57051                                           6 minutes ago       Running             kube-scheduler              0                   d8b6e6a69f8f0       kube-scheduler-old-k8s-version-745133
	
	
	==> coredns [6a28aab70700044c53668d4eceed46b67fba50e2d134c97b4c5cdd9f83c81e4a] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:32897 - 63404 "HINFO IN 854050115976780122.743714874456846604. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.04926594s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:41906 - 3296 "HINFO IN 6266410673912259047.9184460476635664408. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014385437s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0927 01:39:32.634470       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-27 01:39:02.633835143 +0000 UTC m=+0.122281414) (total time: 30.000508922s):
	Trace[2019727887]: [30.000508922s] [30.000508922s] END
	E0927 01:39:32.634585       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0927 01:39:32.634756       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-27 01:39:02.634432438 +0000 UTC m=+0.122878701) (total time: 30.000270363s):
	Trace[939984059]: [30.000270363s] [30.000270363s] END
	E0927 01:39:32.634794       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0927 01:39:32.641471       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-27 01:39:02.634691706 +0000 UTC m=+0.123137969) (total time: 30.006753813s):
	Trace[1474941318]: [30.006753813s] [30.006753813s] END
	E0927 01:39:32.641491       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-745133
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-745133
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=old-k8s-version-745133
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T01_36_13_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 01:36:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-745133
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 01:44:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 01:39:51 +0000   Fri, 27 Sep 2024 01:36:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 01:39:51 +0000   Fri, 27 Sep 2024 01:36:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 01:39:51 +0000   Fri, 27 Sep 2024 01:36:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 01:39:51 +0000   Fri, 27 Sep 2024 01:36:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-745133
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 db5b11027cb742ffb6640c5adfb52596
	  System UUID:                a2ec7984-32b1-4e5d-9f7e-bed2abb3c864
	  Boot ID:                    7df4580f-f941-474d-8050-3bbd7f78d321
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 coredns-74ff55c5b-drjjb                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m23s
	  kube-system                 etcd-old-k8s-version-745133                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m31s
	  kube-system                 kindnet-84442                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m23s
	  kube-system                 kube-apiserver-old-k8s-version-745133             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-old-k8s-version-745133    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-tvwdl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-old-k8s-version-745133             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 metrics-server-9975d5f86-5bphl                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m35s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-4qmkk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-msqw2               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 8m32s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m32s                kubelet     Node old-k8s-version-745133 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m32s                kubelet     Node old-k8s-version-745133 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m32s                kubelet     Node old-k8s-version-745133 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m22s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                8m1s                 kubelet     Node old-k8s-version-745133 status is now: NodeReady
	  Normal  Starting                 6m5s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m5s (x8 over 6m5s)  kubelet     Node old-k8s-version-745133 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m5s (x8 over 6m5s)  kubelet     Node old-k8s-version-745133 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m5s (x8 over 6m5s)  kubelet     Node old-k8s-version-745133 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m49s                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Sep27 00:06] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[Sep27 01:31] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [fbfae056e26dd1b15e6b109b732296bcb89c5db7fabc0f8958574a0fc1248e81] <==
	2024-09-27 01:40:48.968826 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:40:58.968961 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:41:08.968854 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:41:18.968967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:41:28.969025 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:41:38.968941 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:41:48.968769 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:41:58.968833 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:42:08.968833 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:42:18.968850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:42:28.968786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:42:38.968848 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:42:48.968836 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:42:58.968895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:43:08.968829 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:43:18.968826 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:43:28.968748 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:43:38.968835 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:43:48.968852 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:43:58.968809 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:44:08.970905 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:44:18.968713 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:44:28.968885 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:44:38.969218 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-27 01:44:48.968869 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 01:44:52 up  5:27,  0 users,  load average: 0.70, 1.66, 2.12
	Linux old-k8s-version-745133 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f74f5a74223b633c3ca99ac7465604311677c0c0b43adeacf4843a7ade66ca48] <==
	I0927 01:42:44.720807       1 main.go:299] handling current node
	I0927 01:42:54.721918       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:42:54.721957       1 main.go:299] handling current node
	I0927 01:43:04.720447       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:43:04.720584       1 main.go:299] handling current node
	I0927 01:43:14.722427       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:43:14.722459       1 main.go:299] handling current node
	I0927 01:43:24.728434       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:43:24.728470       1 main.go:299] handling current node
	I0927 01:43:34.726836       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:43:34.726869       1 main.go:299] handling current node
	I0927 01:43:44.727661       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:43:44.727697       1 main.go:299] handling current node
	I0927 01:43:54.722850       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:43:54.722885       1 main.go:299] handling current node
	I0927 01:44:04.720188       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:44:04.720302       1 main.go:299] handling current node
	I0927 01:44:14.726788       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:44:14.726822       1 main.go:299] handling current node
	I0927 01:44:24.728498       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:44:24.728530       1 main.go:299] handling current node
	I0927 01:44:34.720282       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:44:34.720395       1 main.go:299] handling current node
	I0927 01:44:44.726636       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0927 01:44:44.726763       1 main.go:299] handling current node
	
	
	==> kube-apiserver [728631fe1253bab9992c7a58f88fca5a34491a3d06d1a6601e0e70566e7d10f4] <==
	I0927 01:41:10.973342       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 01:41:10.973351       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0927 01:41:49.850778       1 client.go:360] parsed scheme: "passthrough"
	I0927 01:41:49.850822       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 01:41:49.850831       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0927 01:42:03.407866       1 handler_proxy.go:102] no RequestInfo found in the context
	E0927 01:42:03.407941       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0927 01:42:03.407949       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:42:32.364973       1 client.go:360] parsed scheme: "passthrough"
	I0927 01:42:32.365014       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 01:42:32.365025       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0927 01:43:15.541835       1 client.go:360] parsed scheme: "passthrough"
	I0927 01:43:15.541901       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 01:43:15.541911       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0927 01:43:53.091050       1 client.go:360] parsed scheme: "passthrough"
	I0927 01:43:53.091094       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 01:43:53.091106       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0927 01:44:01.298691       1 handler_proxy.go:102] no RequestInfo found in the context
	E0927 01:44:01.298804       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0927 01:44:01.298865       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0927 01:44:26.575613       1 client.go:360] parsed scheme: "passthrough"
	I0927 01:44:26.575656       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0927 01:44:26.575665       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [fd21817034d06ff8ec30b1fa5089ddfd190af71c8dfdcde0d32fd61181caafaa] <==
	W0927 01:40:27.681576       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 01:40:51.523962       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 01:40:59.332910       1 request.go:655] Throttling request took 1.048411631s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	W0927 01:41:00.184555       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 01:41:22.025801       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 01:41:31.834985       1 request.go:655] Throttling request took 1.048271669s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0927 01:41:32.686390       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 01:41:52.527573       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 01:42:04.336811       1 request.go:655] Throttling request took 1.048305696s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0927 01:42:05.188220       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 01:42:23.029445       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 01:42:36.838648       1 request.go:655] Throttling request took 1.048556723s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0927 01:42:37.690162       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 01:42:53.531196       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 01:43:09.340581       1 request.go:655] Throttling request took 1.048476502s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0927 01:43:10.192018       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 01:43:24.033128       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 01:43:41.842412       1 request.go:655] Throttling request took 1.048312355s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0927 01:43:42.693781       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 01:43:54.534919       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 01:44:14.387681       1 request.go:655] Throttling request took 1.048299995s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0927 01:44:15.239263       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0927 01:44:25.036789       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0927 01:44:46.889692       1 request.go:655] Throttling request took 1.048409135s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W0927 01:44:47.741161       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [1f91c6f77f281dbe05880609e73acc353cb7ee468afa455c7d22c45e9428661e] <==
	I0927 01:36:30.733279       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0927 01:36:30.735110       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0927 01:36:30.798859       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0927 01:36:30.811340       1 server_others.go:185] Using iptables Proxier.
	I0927 01:36:30.843199       1 server.go:650] Version: v1.20.0
	I0927 01:36:30.843939       1 config.go:315] Starting service config controller
	I0927 01:36:30.844010       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0927 01:36:30.970476       1 config.go:224] Starting endpoint slice config controller
	I0927 01:36:30.970500       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0927 01:36:30.971335       1 shared_informer.go:247] Caches are synced for service config 
	I0927 01:36:31.072754       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0927 01:39:03.500144       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0927 01:39:03.500390       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0927 01:39:03.519540       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0927 01:39:03.519765       1 server_others.go:185] Using iptables Proxier.
	I0927 01:39:03.520036       1 server.go:650] Version: v1.20.0
	I0927 01:39:03.520876       1 config.go:315] Starting service config controller
	I0927 01:39:03.520942       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0927 01:39:03.521035       1 config.go:224] Starting endpoint slice config controller
	I0927 01:39:03.521071       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0927 01:39:03.622774       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0927 01:39:03.622963       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [1399d95796260c7e27b7a91a70576c5a3e3bfcfee9fb91839ddfe1b01c5114c0] <==
	E0927 01:36:09.439125       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 01:36:09.439208       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 01:36:09.439412       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 01:36:09.440281       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 01:36:10.263218       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 01:36:10.271660       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 01:36:10.297247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 01:36:10.359392       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 01:36:10.373721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 01:36:10.390618       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 01:36:10.446520       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 01:36:10.513622       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0927 01:36:13.035444       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0927 01:38:53.861068       1 serving.go:331] Generated self-signed cert in-memory
	I0927 01:39:00.782063       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0927 01:39:00.783997       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0927 01:39:00.784134       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0927 01:39:00.866246       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0927 01:39:00.784146       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 01:39:00.866352       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 01:39:00.784156       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0927 01:39:00.881764       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0927 01:39:00.970941       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	I0927 01:39:00.972190       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0927 01:39:00.982431       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	
	
	==> kubelet <==
	Sep 27 01:43:26 old-k8s-version-745133 kubelet[736]: I0927 01:43:26.674295     736 scope.go:95] [topologymanager] RemoveContainer - Container ID: 054448306ce8e579034f52bb9851e525f2d5f2d5e52282bf77a6cbf74299bde7
	Sep 27 01:43:26 old-k8s-version-745133 kubelet[736]: E0927 01:43:26.674663     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	Sep 27 01:43:28 old-k8s-version-745133 kubelet[736]: E0927 01:43:28.675242     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 01:43:39 old-k8s-version-745133 kubelet[736]: E0927 01:43:39.675681     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 01:43:40 old-k8s-version-745133 kubelet[736]: I0927 01:43:40.674283     736 scope.go:95] [topologymanager] RemoveContainer - Container ID: 054448306ce8e579034f52bb9851e525f2d5f2d5e52282bf77a6cbf74299bde7
	Sep 27 01:43:40 old-k8s-version-745133 kubelet[736]: E0927 01:43:40.674684     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	Sep 27 01:43:47 old-k8s-version-745133 kubelet[736]: E0927 01:43:47.655568     736 container_manager_linux.go:533] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/1fb14722efe43a080319cb455e783513aeccc71eb22ae6ffe2a2fad7eb054cbd, memory: /docker/1fb14722efe43a080319cb455e783513aeccc71eb22ae6ffe2a2fad7eb054cbd/system.slice/kubelet.service
	Sep 27 01:43:50 old-k8s-version-745133 kubelet[736]: E0927 01:43:50.675166     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 01:43:54 old-k8s-version-745133 kubelet[736]: I0927 01:43:54.674330     736 scope.go:95] [topologymanager] RemoveContainer - Container ID: 054448306ce8e579034f52bb9851e525f2d5f2d5e52282bf77a6cbf74299bde7
	Sep 27 01:43:54 old-k8s-version-745133 kubelet[736]: E0927 01:43:54.675191     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	Sep 27 01:44:02 old-k8s-version-745133 kubelet[736]: E0927 01:44:02.676623     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: I0927 01:44:06.674294     736 scope.go:95] [topologymanager] RemoveContainer - Container ID: 054448306ce8e579034f52bb9851e525f2d5f2d5e52282bf77a6cbf74299bde7
	Sep 27 01:44:06 old-k8s-version-745133 kubelet[736]: E0927 01:44:06.674628     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	Sep 27 01:44:17 old-k8s-version-745133 kubelet[736]: E0927 01:44:17.675590     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: I0927 01:44:21.674489     736 scope.go:95] [topologymanager] RemoveContainer - Container ID: 054448306ce8e579034f52bb9851e525f2d5f2d5e52282bf77a6cbf74299bde7
	Sep 27 01:44:21 old-k8s-version-745133 kubelet[736]: E0927 01:44:21.674844     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	Sep 27 01:44:30 old-k8s-version-745133 kubelet[736]: E0927 01:44:30.675344     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 27 01:44:34 old-k8s-version-745133 kubelet[736]: I0927 01:44:34.674283     736 scope.go:95] [topologymanager] RemoveContainer - Container ID: 054448306ce8e579034f52bb9851e525f2d5f2d5e52282bf77a6cbf74299bde7
	Sep 27 01:44:34 old-k8s-version-745133 kubelet[736]: E0927 01:44:34.674650     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	Sep 27 01:44:41 old-k8s-version-745133 kubelet[736]: E0927 01:44:41.689257     736 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 27 01:44:41 old-k8s-version-745133 kubelet[736]: E0927 01:44:41.689309     736 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 27 01:44:41 old-k8s-version-745133 kubelet[736]: E0927 01:44:41.689461     736 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-9xfpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-5bphl_kube-system(249f17e
8-637b-4946-8ed3-edff6860c82b): ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 27 01:44:41 old-k8s-version-745133 kubelet[736]: E0927 01:44:41.689488     736 pod_workers.go:191] Error syncing pod 249f17e8-637b-4946-8ed3-edff6860c82b ("metrics-server-9975d5f86-5bphl_kube-system(249f17e8-637b-4946-8ed3-edff6860c82b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Sep 27 01:44:48 old-k8s-version-745133 kubelet[736]: I0927 01:44:48.674243     736 scope.go:95] [topologymanager] RemoveContainer - Container ID: 054448306ce8e579034f52bb9851e525f2d5f2d5e52282bf77a6cbf74299bde7
	Sep 27 01:44:48 old-k8s-version-745133 kubelet[736]: E0927 01:44:48.674591     736 pod_workers.go:191] Error syncing pod 989e3cab-2430-4d81-91f7-b0098b27e30c ("dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4qmkk_kubernetes-dashboard(989e3cab-2430-4d81-91f7-b0098b27e30c)"
	
	
	==> kubernetes-dashboard [7a82c31e2981413dba52726ae93a9b6786429d4055c821906975f6a58bb6787c] <==
	2024/09/27 01:39:25 Using namespace: kubernetes-dashboard
	2024/09/27 01:39:25 Using in-cluster config to connect to apiserver
	2024/09/27 01:39:25 Using secret token for csrf signing
	2024/09/27 01:39:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/27 01:39:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/27 01:39:25 Successful initial request to the apiserver, version: v1.20.0
	2024/09/27 01:39:25 Generating JWE encryption key
	2024/09/27 01:39:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/27 01:39:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/27 01:39:26 Initializing JWE encryption key from synchronized object
	2024/09/27 01:39:26 Creating in-cluster Sidecar client
	2024/09/27 01:39:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:39:26 Serving insecurely on HTTP port: 9090
	2024/09/27 01:39:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:40:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:40:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:41:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:41:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:42:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:42:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:43:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:43:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:44:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/27 01:39:25 Starting overwatch
	
	
	==> storage-provisioner [138c41837cf0bbda88ce0b41c3d306956ef57ff9c2cb680fa71af1f16e609832] <==
	I0927 01:36:55.793059       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 01:36:55.811664       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 01:36:55.811777       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 01:36:55.835835       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 01:36:55.835993       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-745133_0238d71c-6d61-45e2-8a17-8402f2337631!
	I0927 01:36:55.836233       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb63fc24-0025-4bfe-9bb1-d7dbe0da4177", APIVersion:"v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-745133_0238d71c-6d61-45e2-8a17-8402f2337631 became leader
	I0927 01:36:55.936895       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-745133_0238d71c-6d61-45e2-8a17-8402f2337631!
	I0927 01:39:04.282554       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 01:39:04.295440       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 01:39:04.295581       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 01:39:21.751944       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 01:39:21.752395       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb63fc24-0025-4bfe-9bb1-d7dbe0da4177", APIVersion:"v1", ResourceVersion:"751", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-745133_3fa9050b-8ba5-42b3-9684-551107498665 became leader
	I0927 01:39:21.752502       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-745133_3fa9050b-8ba5-42b3-9684-551107498665!
	I0927 01:39:21.853569       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-745133_3fa9050b-8ba5-42b3-9684-551107498665!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-745133 -n old-k8s-version-745133
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-745133 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-5bphl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-745133 describe pod metrics-server-9975d5f86-5bphl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-745133 describe pod metrics-server-9975d5f86-5bphl: exit status 1 (126.989817ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-5bphl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-745133 describe pod metrics-server-9975d5f86-5bphl: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (383.01s)
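The kubelet log captured above shows the two failure loops behind this FAIL: metrics-server-9975d5f86-5bphl stuck in ImagePullBackOff on the unreachable image fake.domain/registry.k8s.io/echoserver:1.4, and dashboard-metrics-scraper-8d5bb5db8-4qmkk in CrashLoopBackOff. The commands below are a minimal sketch, not part of the captured output, of how the same state could be inspected by hand while the cluster is still running; the pod names, namespaces and kube context are taken from the log above.

	# Why the metrics-server pod cannot start (image pull errors and related events)
	kubectl --context old-k8s-version-745133 -n kube-system describe pod metrics-server-9975d5f86-5bphl

	# Recent kube-system events, oldest first
	kubectl --context old-k8s-version-745133 -n kube-system get events --sort-by=.metadata.creationTimestamp

	# Output of the last crashed dashboard-metrics-scraper container
	kubectl --context old-k8s-version-745133 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-4qmkk --previous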

                                                
                                    

Test pass (294/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.76
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.35
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 215.91
31 TestAddons/serial/GCPAuth/Namespaces 0.23
35 TestAddons/parallel/InspektorGadget 10.75
38 TestAddons/parallel/CSI 51.82
39 TestAddons/parallel/Headlamp 16.66
40 TestAddons/parallel/CloudSpanner 6.54
41 TestAddons/parallel/LocalPath 10.31
42 TestAddons/parallel/NvidiaDevicePlugin 6.49
43 TestAddons/parallel/Yakd 11.7
44 TestAddons/StoppedEnableDisable 12.12
45 TestCertOptions 38.26
46 TestCertExpiration 239.04
48 TestForceSystemdFlag 36.37
49 TestForceSystemdEnv 40.78
55 TestErrorSpam/setup 33.4
56 TestErrorSpam/start 0.68
57 TestErrorSpam/status 1.06
58 TestErrorSpam/pause 1.72
59 TestErrorSpam/unpause 1.73
60 TestErrorSpam/stop 1.46
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 45.85
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 42.08
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.43
72 TestFunctional/serial/CacheCmd/cache/add_local 1.37
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.11
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.13
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 30.8
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.65
83 TestFunctional/serial/LogsFileCmd 1.65
84 TestFunctional/serial/InvalidService 4.28
86 TestFunctional/parallel/ConfigCmd 0.4
87 TestFunctional/parallel/DashboardCmd 10.32
88 TestFunctional/parallel/DryRun 0.42
89 TestFunctional/parallel/InternationalLanguage 0.18
90 TestFunctional/parallel/StatusCmd 0.97
94 TestFunctional/parallel/ServiceCmdConnect 10.68
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 23.92
98 TestFunctional/parallel/SSHCmd 0.67
99 TestFunctional/parallel/CpCmd 2.28
101 TestFunctional/parallel/FileSync 0.35
102 TestFunctional/parallel/CertSync 1.91
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
110 TestFunctional/parallel/License 0.3
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
124 TestFunctional/parallel/ProfileCmd/profile_list 0.42
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
126 TestFunctional/parallel/MountCmd/any-port 7.92
127 TestFunctional/parallel/ServiceCmd/List 0.61
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
130 TestFunctional/parallel/ServiceCmd/Format 0.38
131 TestFunctional/parallel/ServiceCmd/URL 0.34
132 TestFunctional/parallel/MountCmd/specific-port 2.31
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.49
134 TestFunctional/parallel/Version/short 0.08
135 TestFunctional/parallel/Version/components 0.96
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.63
141 TestFunctional/parallel/ImageCommands/Setup 0.79
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.48
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.59
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.65
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 175.54
159 TestMultiControlPlane/serial/DeployApp 8.41
160 TestMultiControlPlane/serial/PingHostFromPods 1.56
161 TestMultiControlPlane/serial/AddWorkerNode 32.8
162 TestMultiControlPlane/serial/NodeLabels 0.12
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
164 TestMultiControlPlane/serial/CopyFile 17.81
165 TestMultiControlPlane/serial/StopSecondaryNode 12.69
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.74
167 TestMultiControlPlane/serial/RestartSecondaryNode 23.01
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.38
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 207.09
170 TestMultiControlPlane/serial/DeleteSecondaryNode 12.23
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
172 TestMultiControlPlane/serial/StopCluster 35.81
173 TestMultiControlPlane/serial/RestartCluster 117.99
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
175 TestMultiControlPlane/serial/AddSecondaryNode 70.51
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
180 TestJSONOutput/start/Command 80.39
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.71
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.64
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.87
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
205 TestKicCustomNetwork/create_custom_network 38.63
206 TestKicCustomNetwork/use_default_bridge_network 34.42
207 TestKicExistingNetwork 32
208 TestKicCustomSubnet 32.64
209 TestKicStaticIP 33.26
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 65.65
214 TestMountStart/serial/StartWithMountFirst 6.56
215 TestMountStart/serial/VerifyMountFirst 0.25
216 TestMountStart/serial/StartWithMountSecond 6.47
217 TestMountStart/serial/VerifyMountSecond 0.25
218 TestMountStart/serial/DeleteFirst 1.6
219 TestMountStart/serial/VerifyMountPostDelete 0.25
220 TestMountStart/serial/Stop 1.21
221 TestMountStart/serial/RestartStopped 7.6
222 TestMountStart/serial/VerifyMountPostStop 0.25
225 TestMultiNode/serial/FreshStart2Nodes 135.69
226 TestMultiNode/serial/DeployApp2Nodes 6.37
227 TestMultiNode/serial/PingHostFrom2Pods 0.97
228 TestMultiNode/serial/AddNode 58.35
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.65
231 TestMultiNode/serial/CopyFile 9.4
232 TestMultiNode/serial/StopNode 2.17
233 TestMultiNode/serial/StartAfterStop 9.6
234 TestMultiNode/serial/RestartKeepsNodes 116.18
235 TestMultiNode/serial/DeleteNode 5.35
236 TestMultiNode/serial/StopMultiNode 23.84
237 TestMultiNode/serial/RestartMultiNode 54.31
238 TestMultiNode/serial/ValidateNameConflict 32.51
243 TestPreload 127.6
245 TestScheduledStopUnix 104.53
248 TestInsufficientStorage 10.14
249 TestRunningBinaryUpgrade 69.44
251 TestKubernetesUpgrade 391.14
252 TestMissingContainerUpgrade 163.34
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 36.88
256 TestNoKubernetes/serial/StartWithStopK8s 7.74
257 TestNoKubernetes/serial/Start 8.3
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.43
259 TestNoKubernetes/serial/ProfileList 1.67
260 TestNoKubernetes/serial/Stop 1.24
261 TestNoKubernetes/serial/StartNoArgs 7.45
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
263 TestStoppedBinaryUpgrade/Setup 0.94
264 TestStoppedBinaryUpgrade/Upgrade 80.52
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.09
274 TestPause/serial/Start 77.93
275 TestPause/serial/SecondStartNoReconfiguration 17.5
276 TestPause/serial/Pause 1.23
277 TestPause/serial/VerifyStatus 0.45
278 TestPause/serial/Unpause 1.24
279 TestPause/serial/PauseAgain 1.43
280 TestPause/serial/DeletePaused 2.95
281 TestPause/serial/VerifyDeletedResources 0.46
289 TestNetworkPlugins/group/false 5.33
294 TestStartStop/group/old-k8s-version/serial/FirstStart 164.64
295 TestStartStop/group/old-k8s-version/serial/DeployApp 11.92
297 TestStartStop/group/no-preload/serial/FirstStart 71.7
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.38
299 TestStartStop/group/old-k8s-version/serial/Stop 13.48
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.28
302 TestStartStop/group/no-preload/serial/DeployApp 11.44
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.15
304 TestStartStop/group/no-preload/serial/Stop 12.03
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/no-preload/serial/SecondStart 289.74
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.13
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
310 TestStartStop/group/no-preload/serial/Pause 3.48
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/embed-certs/serial/FirstStart 94.47
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
316 TestStartStop/group/old-k8s-version/serial/Pause 2.8
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.09
319 TestStartStop/group/embed-certs/serial/DeployApp 9.32
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.35
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
322 TestStartStop/group/embed-certs/serial/Stop 11.98
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
324 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.96
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/embed-certs/serial/SecondStart 268.79
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 304.2
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
332 TestStartStop/group/embed-certs/serial/Pause 2.97
334 TestStartStop/group/newest-cni/serial/FirstStart 34.42
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
340 TestStartStop/group/newest-cni/serial/Stop 1.26
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.64
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
343 TestStartStop/group/newest-cni/serial/SecondStart 19.21
344 TestNetworkPlugins/group/auto/Start 57.9
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
348 TestStartStop/group/newest-cni/serial/Pause 3.91
349 TestNetworkPlugins/group/kindnet/Start 81.13
350 TestNetworkPlugins/group/auto/KubeletFlags 0.43
351 TestNetworkPlugins/group/auto/NetCatPod 9.33
352 TestNetworkPlugins/group/auto/DNS 0.19
353 TestNetworkPlugins/group/auto/Localhost 0.15
354 TestNetworkPlugins/group/auto/HairPin 0.16
355 TestNetworkPlugins/group/calico/Start 63.47
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
358 TestNetworkPlugins/group/kindnet/NetCatPod 12.37
359 TestNetworkPlugins/group/kindnet/DNS 0.22
360 TestNetworkPlugins/group/kindnet/Localhost 0.18
361 TestNetworkPlugins/group/kindnet/HairPin 0.23
362 TestNetworkPlugins/group/custom-flannel/Start 59.43
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.4
365 TestNetworkPlugins/group/calico/NetCatPod 13.38
366 TestNetworkPlugins/group/calico/DNS 0.25
367 TestNetworkPlugins/group/calico/Localhost 0.24
368 TestNetworkPlugins/group/calico/HairPin 0.22
369 TestNetworkPlugins/group/enable-default-cni/Start 79.5
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.34
372 TestNetworkPlugins/group/custom-flannel/DNS 0.28
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
375 TestNetworkPlugins/group/flannel/Start 42.84
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.37
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
383 TestNetworkPlugins/group/flannel/NetCatPod 11.26
384 TestNetworkPlugins/group/flannel/DNS 0.23
385 TestNetworkPlugins/group/flannel/Localhost 0.2
386 TestNetworkPlugins/group/flannel/HairPin 0.21
387 TestNetworkPlugins/group/bridge/Start 74.03
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
389 TestNetworkPlugins/group/bridge/NetCatPod 11.3
390 TestNetworkPlugins/group/bridge/DNS 0.17
391 TestNetworkPlugins/group/bridge/Localhost 0.15
392 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.20.0/json-events (8.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-005398 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-005398 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.759873916s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.76s)
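With -o=json, minikube writes a stream of JSON events to stdout, which is what this test parses. Purely as an illustration (jq is an assumption here; the test itself does not use it), the same stream from the command above can be pretty-printed by hand:

	# Re-run the download-only start and pretty-print each JSON event (assumes jq is installed)
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-005398 --force \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker | jq .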

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0927 00:33:29.482314  559158 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0927 00:33:29.482394  559158 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
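A quick manual check of the cached tarball reported above (illustrative only, not something the test runs) would be:

	# Confirm the v1.20.0 preload tarball is present in the minikube cache
	ls -lh /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4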

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-005398
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-005398: exit status 85 (62.605253ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-005398 | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |          |
	|         | -p download-only-005398        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:33:20
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:33:20.764641  559163 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:33:20.764824  559163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:20.764860  559163 out.go:358] Setting ErrFile to fd 2...
	I0927 00:33:20.764881  559163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:20.765156  559163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	W0927 00:33:20.765331  559163 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19711-553751/.minikube/config/config.json: open /home/jenkins/minikube-integration/19711-553751/.minikube/config/config.json: no such file or directory
	I0927 00:33:20.765760  559163 out.go:352] Setting JSON to true
	I0927 00:33:20.766626  559163 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15344,"bootTime":1727381857,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0927 00:33:20.766758  559163 start.go:139] virtualization:  
	I0927 00:33:20.769013  559163 out.go:97] [download-only-005398] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0927 00:33:20.769167  559163 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:33:20.769221  559163 notify.go:220] Checking for updates...
	I0927 00:33:20.770569  559163 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:33:20.771927  559163 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:33:20.773531  559163 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 00:33:20.774837  559163 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	I0927 00:33:20.776116  559163 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0927 00:33:20.779614  559163 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:33:20.779852  559163 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:33:20.800629  559163 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:33:20.800743  559163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:20.856179  559163 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:33:20.845606432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:20.856289  559163 docker.go:318] overlay module found
	I0927 00:33:20.858013  559163 out.go:97] Using the docker driver based on user configuration
	I0927 00:33:20.858035  559163 start.go:297] selected driver: docker
	I0927 00:33:20.858042  559163 start.go:901] validating driver "docker" against <nil>
	I0927 00:33:20.858147  559163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:20.903716  559163 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:33:20.894269723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:20.903936  559163 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:33:20.904228  559163 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0927 00:33:20.904388  559163 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:33:20.906333  559163 out.go:169] Using Docker driver with root privileges
	I0927 00:33:20.907933  559163 cni.go:84] Creating CNI manager for ""
	I0927 00:33:20.908010  559163 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:33:20.908023  559163 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:33:20.908107  559163 start.go:340] cluster config:
	{Name:download-only-005398 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-005398 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:33:20.909641  559163 out.go:97] Starting "download-only-005398" primary control-plane node in "download-only-005398" cluster
	I0927 00:33:20.909661  559163 cache.go:121] Beginning downloading kic base image for docker with crio
	I0927 00:33:20.911312  559163 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:33:20.911339  559163 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 00:33:20.911448  559163 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:33:20.925909  559163 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:33:20.926115  559163 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:33:20.926211  559163 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:33:20.982470  559163 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0927 00:33:20.982510  559163 cache.go:56] Caching tarball of preloaded images
	I0927 00:33:20.982679  559163 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 00:33:20.984329  559163 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0927 00:33:20.984355  559163 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0927 00:33:21.070048  559163 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-005398 host does not exist
	  To start a cluster, run: "minikube start -p download-only-005398"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
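The preload download URL in the log above carries an md5 checksum query parameter (59cd2ef07b53f039bfd1761b921f2a02). As a hand-run sanity check, illustrative only and not part of the test, the cached tarball can be compared against that value:

	# Recompute the md5 of the cached preload; it should match the checksum from the download URL
	md5sum /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	# expected: 59cd2ef07b53f039bfd1761b921f2a02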

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-005398
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (6.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-763965 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-763965 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.353126815s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.35s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0927 00:33:36.223200  559158 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0927 00:33:36.223236  559158 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-763965
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-763965: exit status 85 (61.220933ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-005398 | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | -p download-only-005398        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| delete  | -p download-only-005398        | download-only-005398 | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC | 27 Sep 24 00:33 UTC |
	| start   | -o=json --download-only        | download-only-763965 | jenkins | v1.34.0 | 27 Sep 24 00:33 UTC |                     |
	|         | -p download-only-763965        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:33:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:33:29.913145  559367 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:33:29.913323  559367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:29.913374  559367 out.go:358] Setting ErrFile to fd 2...
	I0927 00:33:29.913396  559367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:33:29.913647  559367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 00:33:29.914060  559367 out.go:352] Setting JSON to true
	I0927 00:33:29.914973  559367 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15353,"bootTime":1727381857,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0927 00:33:29.915065  559367 start.go:139] virtualization:  
	I0927 00:33:29.916769  559367 out.go:97] [download-only-763965] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:33:29.916934  559367 notify.go:220] Checking for updates...
	I0927 00:33:29.918355  559367 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:33:29.919595  559367 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:33:29.921056  559367 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 00:33:29.922446  559367 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	I0927 00:33:29.923612  559367 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0927 00:33:29.925588  559367 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 00:33:29.925826  559367 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:33:29.946262  559367 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:33:29.946375  559367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:30.005171  559367 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 00:33:29.994511918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:30.005302  559367 docker.go:318] overlay module found
	I0927 00:33:30.006880  559367 out.go:97] Using the docker driver based on user configuration
	I0927 00:33:30.006931  559367 start.go:297] selected driver: docker
	I0927 00:33:30.006940  559367 start.go:901] validating driver "docker" against <nil>
	I0927 00:33:30.007063  559367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:33:30.061990  559367 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-27 00:33:30.051421651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:33:30.062247  559367 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:33:30.062583  559367 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0927 00:33:30.062934  559367 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 00:33:30.064527  559367 out.go:169] Using Docker driver with root privileges
	I0927 00:33:30.065538  559367 cni.go:84] Creating CNI manager for ""
	I0927 00:33:30.065615  559367 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0927 00:33:30.065630  559367 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 00:33:30.065741  559367 start.go:340] cluster config:
	{Name:download-only-763965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-763965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:33:30.067665  559367 out.go:97] Starting "download-only-763965" primary control-plane node in "download-only-763965" cluster
	I0927 00:33:30.067701  559367 cache.go:121] Beginning downloading kic base image for docker with crio
	I0927 00:33:30.069661  559367 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0927 00:33:30.069698  559367 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:33:30.069768  559367 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0927 00:33:30.090260  559367 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0927 00:33:30.090414  559367 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0927 00:33:30.090437  559367 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0927 00:33:30.090442  559367 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0927 00:33:30.090450  559367 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0927 00:33:30.134610  559367 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0927 00:33:30.134638  559367 cache.go:56] Caching tarball of preloaded images
	I0927 00:33:30.134856  559367 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 00:33:30.136403  559367 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0927 00:33:30.136438  559367 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I0927 00:33:30.264603  559367 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0927 00:33:34.376417  559367 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I0927 00:33:34.376551  559367 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19711-553751/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-763965 host does not exist
	  To start a cluster, run: "minikube start -p download-only-763965"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
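
Note: the flow logged above only caches artifacts and never boots a node. A minimal sketch of the same invocation, reusing the profile name and runtime from this run (other defaults assumed):

	# Fetch the kic base image and the v1.31.1 cri-o preload into the local cache without starting a cluster
	out/minikube-linux-arm64 start -p download-only-763965 --download-only --driver=docker --container-runtime=crio
	# No node exists afterwards, which is why "minikube logs" for this profile exits 85 above
	out/minikube-linux-arm64 delete -p download-only-763965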

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-763965
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
I0927 00:33:37.384230  559158 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-878606 --alsologtostderr --binary-mirror http://127.0.0.1:39419 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-878606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-878606
--- PASS: TestBinaryMirror (0.56s)
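
Note: a minimal sketch of the binary-mirror path exercised above, assuming an HTTP server is already serving the Kubernetes release binaries at the address the test used:

	# Download kubectl/kubelet/kubeadm through the mirror instead of dl.k8s.io, without creating a cluster
	out/minikube-linux-arm64 start --download-only -p binary-mirror-878606 --binary-mirror http://127.0.0.1:39419 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 delete -p binary-mirror-878606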

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-220192
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-220192: exit status 85 (65.295175ms)

                                                
                                                
-- stdout --
	* Profile "addons-220192" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-220192"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-220192
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-220192: exit status 85 (63.626957ms)

                                                
                                                
-- stdout --
	* Profile "addons-220192" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-220192"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (215.91s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-220192 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-220192 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m35.913832272s)
--- PASS: TestAddons/Setup (215.91s)
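
Note: the setup above enables the full addon set in a single start. A shorter sketch with only a subset of the same addons, flags taken from the command above:

	out/minikube-linux-arm64 start -p addons-220192 --wait=true --memory=4000 --driver=docker --container-runtime=crio \
	  --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns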

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-220192 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-220192 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hr4wl" [96bdfa99-ba55-49b9-a159-ee909264a292] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004168362s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-220192
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-220192: (5.748299139s)
--- PASS: TestAddons/parallel/InspektorGadget (10.75s)

                                                
                                    
x
+
TestAddons/parallel/CSI (51.82s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0927 00:45:34.263951  559158 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0927 00:45:34.275252  559158 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 00:45:34.275284  559158 kapi.go:107] duration metric: took 11.347559ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 11.357512ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-220192 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-220192 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2df4170f-73c8-4b4d-8380-feb721b377ce] Pending
helpers_test.go:344: "task-pv-pod" [2df4170f-73c8-4b4d-8380-feb721b377ce] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2df4170f-73c8-4b4d-8380-feb721b377ce] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003997957s
addons_test.go:528: (dbg) Run:  kubectl --context addons-220192 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-220192 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-220192 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-220192 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-220192 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-220192 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-220192 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [96227f28-f380-4f5c-b267-398b2a64b6a6] Pending
helpers_test.go:344: "task-pv-pod-restore" [96227f28-f380-4f5c-b267-398b2a64b6a6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [96227f28-f380-4f5c-b267-398b2a64b6a6] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004067258s
addons_test.go:570: (dbg) Run:  kubectl --context addons-220192 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-220192 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-220192 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-220192 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.97060211s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.82s)
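
Note: a condensed sketch of the snapshot/restore sequence above, assuming the testdata/csi-hostpath-driver manifests from the minikube repository are available locally:

	kubectl --context addons-220192 create -f testdata/csi-hostpath-driver/pvc.yaml            # PVC "hpvc" against the hostpath CSI driver
	kubectl --context addons-220192 create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod "task-pv-pod" binds the claim
	kubectl --context addons-220192 create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo" of the bound PVC
	kubectl --context addons-220192 delete pod task-pv-pod
	kubectl --context addons-220192 delete pvc hpvc
	kubectl --context addons-220192 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # new PVC sourced from the snapshot
	kubectl --context addons-220192 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod "task-pv-pod-restore" mounts the restored claim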

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.66s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-220192 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-ghjcv" [20354f8a-a76f-41d7-9f24-514dd1fc70b6] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-ghjcv" [20354f8a-a76f-41d7-9f24-514dd1fc70b6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-ghjcv" [20354f8a-a76f-41d7-9f24-514dd1fc70b6] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00321151s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-220192 addons disable headlamp --alsologtostderr -v=1: (5.724356892s)
--- PASS: TestAddons/parallel/Headlamp (16.66s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-4hjb6" [34e27e97-4594-467d-9d26-e58eb2d91ac4] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00386542s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-220192
--- PASS: TestAddons/parallel/CloudSpanner (6.54s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.31s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-220192 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-220192 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-220192 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [921215f5-70ed-495c-98b5-d82f0b25f6e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [921215f5-70ed-495c-98b5-d82f0b25f6e1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [921215f5-70ed-495c-98b5-d82f0b25f6e1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003759041s
addons_test.go:938: (dbg) Run:  kubectl --context addons-220192 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 ssh "cat /opt/local-path-provisioner/pvc-6c77448d-421d-4ba2-854e-92e4b80ec990_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-220192 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-220192 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.31s)
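
Note: a sketch of the local-path check above, assuming the storage-provisioner-rancher testdata manifests; the pvc-... directory name is specific to this run:

	kubectl --context addons-220192 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-220192 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# After the pod completes, read the file it wrote from the provisioner's host path inside the node
	out/minikube-linux-arm64 -p addons-220192 ssh "cat /opt/local-path-provisioner/pvc-6c77448d-421d-4ba2-854e-92e4b80ec990_default_test-pvc/file1"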

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dqrvw" [e6729774-57a9-49c2-a405-b1a541551dd4] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003619222s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-220192
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.49s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-rxkjm" [419a3bc8-9016-4ed2-8372-140f94fa5993] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003561467s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-220192 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-220192 addons disable yakd --alsologtostderr -v=1: (5.694519159s)
--- PASS: TestAddons/parallel/Yakd (11.70s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.12s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-220192
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-220192: (11.858943748s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-220192
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-220192
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-220192
--- PASS: TestAddons/StoppedEnableDisable (12.12s)

                                                
                                    
x
+
TestCertOptions (38.26s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-617701 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-617701 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.653586403s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-617701 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-617701 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-617701 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-617701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-617701
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-617701: (1.985617277s)
--- PASS: TestCertOptions (38.26s)
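
Note: a sketch of starting with custom API-server SANs and port and then checking the served certificate, as exercised above; the extra IPs and names are the test's own values, not requirements:

	out/minikube-linux-arm64 start -p cert-options-617701 --memory=2048 --driver=docker --container-runtime=crio \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555
	# The added IPs/names should appear as Subject Alternative Names in the apiserver certificate
	out/minikube-linux-arm64 -p cert-options-617701 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"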

                                                
                                    
x
+
TestCertExpiration (239.04s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-686343 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-686343 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.715918137s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-686343 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-686343 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.881359009s)
helpers_test.go:175: Cleaning up "cert-expiration-686343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-686343
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-686343: (2.442208113s)
--- PASS: TestCertExpiration (239.04s)
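
Note: the expiration flow above, condensed; the first start issues 3-minute certificates and a later restart of the same profile re-issues them with a one-year validity:

	out/minikube-linux-arm64 start -p cert-expiration-686343 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=crio
	# ...wait for the short-lived certificates to lapse, then restart with a longer validity
	out/minikube-linux-arm64 start -p cert-expiration-686343 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=crio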

                                                
                                    
x
+
TestForceSystemdFlag (36.37s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-867492 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-867492 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.773766218s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-867492 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-867492" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-867492
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-867492: (3.134523623s)
--- PASS: TestForceSystemdFlag (36.37s)
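
Note: a sketch of the systemd-cgroup check above; the test asserts against the generated CRI-O drop-in, which with --force-systemd is expected to select the systemd cgroup manager:

	out/minikube-linux-arm64 start -p force-systemd-flag-867492 --memory=2048 --force-systemd --driver=docker --container-runtime=crio
	# Inspect the CRI-O drop-in written by minikube inside the node
	out/minikube-linux-arm64 -p force-systemd-flag-867492 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"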

                                                
                                    
x
+
TestForceSystemdEnv (40.78s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-980399 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-980399 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.373996613s)
helpers_test.go:175: Cleaning up "force-systemd-env-980399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-980399
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-980399: (2.408413502s)
--- PASS: TestForceSystemdEnv (40.78s)

                                                
                                    
x
+
TestErrorSpam/setup (33.4s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-776570 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-776570 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-776570 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-776570 --driver=docker  --container-runtime=crio: (33.404801434s)
--- PASS: TestErrorSpam/setup (33.40s)

                                                
                                    
x
+
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
x
+
TestErrorSpam/status (1.06s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
x
+
TestErrorSpam/pause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 pause
--- PASS: TestErrorSpam/pause (1.72s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
x
+
TestErrorSpam/stop (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 stop: (1.268287536s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-776570 --log_dir /tmp/nospam-776570 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19711-553751/.minikube/files/etc/test/nested/copy/559158/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (45.85s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-506734 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-506734 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (45.850661603s)
--- PASS: TestFunctional/serial/StartWithProxy (45.85s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (42.08s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0927 00:54:12.292673  559158 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-506734 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-506734 --alsologtostderr -v=8: (42.080233361s)
functional_test.go:663: soft start took 42.080765154s for "functional-506734" cluster.
I0927 00:54:54.373192  559158 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (42.08s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-506734 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-506734 cache add registry.k8s.io/pause:3.1: (1.525003363s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-506734 cache add registry.k8s.io/pause:3.3: (1.472657308s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-506734 cache add registry.k8s.io/pause:latest: (1.432544573s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.43s)
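
Note: the cache-add path above, condensed; each image is pulled once, stored under minikube's local cache, and loaded into the node:

	out/minikube-linux-arm64 -p functional-506734 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 -p functional-506734 cache add registry.k8s.io/pause:3.3
	out/minikube-linux-arm64 -p functional-506734 cache add registry.k8s.io/pause:latest
	# The cached images should now be listed inside the node
	out/minikube-linux-arm64 -p functional-506734 ssh sudo crictl images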

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-506734 /tmp/TestFunctionalserialCacheCmdcacheadd_local285510768/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 cache add minikube-local-cache-test:functional-506734
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 cache delete minikube-local-cache-test:functional-506734
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-506734
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-506734 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.538184ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-506734 cache reload: (1.216582898s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.11s)
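
Note: a sketch of the reload round-trip above: delete a cached image inside the node, confirm it is gone, then let "cache reload" push it back from the local cache:

	out/minikube-linux-arm64 -p functional-506734 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-506734 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
	out/minikube-linux-arm64 -p functional-506734 cache reload
	out/minikube-linux-arm64 -p functional-506734 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again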

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 kubectl -- --context functional-506734 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-506734 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (30.8s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-506734 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-506734 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.802599473s)
functional_test.go:761: restart took 30.802714342s for "functional-506734" cluster.
I0927 00:55:34.036804  559158 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (30.80s)
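
Note: a sketch of the restart-with-extra-config step above; the admission-plugin value is the one this test forwards to the kube-apiserver:

	# Restart the existing profile, passing an extra kube-apiserver flag via --extra-config
	out/minikube-linux-arm64 start -p functional-506734 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all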

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-506734 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-506734 logs: (1.649640908s)
--- PASS: TestFunctional/serial/LogsCmd (1.65s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 logs --file /tmp/TestFunctionalserialLogsFileCmd3475785361/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-506734 logs --file /tmp/TestFunctionalserialLogsFileCmd3475785361/001/logs.txt: (1.648869895s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.28s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-506734 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-506734
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-506734: exit status 115 (503.724456ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31654 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-506734 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.28s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-506734 config get cpus: exit status 14 (60.624696ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-506734 config get cpus: exit status 14 (73.168684ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
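For reference, a minimal sketch of the config round-trip this section exercises (profile name taken from the log above); when the key is not set, config get exits with status 14 and prints the error shown in the stderr blocks.
	$ out/minikube-linux-arm64 -p functional-506734 config get cpus
	# exit status 14: "Error: specified key could not be found in config"
	$ out/minikube-linux-arm64 -p functional-506734 config set cpus 2
	$ out/minikube-linux-arm64 -p functional-506734 config get cpus    # succeeds while the key is set
	$ out/minikube-linux-arm64 -p functional-506734 config unset cpus
	$ out/minikube-linux-arm64 -p functional-506734 config get cpus    # exit status 14 again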

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-506734 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-506734 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 587162: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.32s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-506734 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-506734 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (177.012642ms)

                                                
                                                
-- stdout --
	* [functional-506734] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 00:56:13.639235  586852 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:56:13.639394  586852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:56:13.639410  586852 out.go:358] Setting ErrFile to fd 2...
	I0927 00:56:13.639416  586852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:56:13.639681  586852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 00:56:13.640063  586852 out.go:352] Setting JSON to false
	I0927 00:56:13.641032  586852 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16717,"bootTime":1727381857,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0927 00:56:13.641109  586852 start.go:139] virtualization:  
	I0927 00:56:13.644124  586852 out.go:177] * [functional-506734] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 00:56:13.647580  586852 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:56:13.647650  586852 notify.go:220] Checking for updates...
	I0927 00:56:13.653624  586852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:56:13.656391  586852 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 00:56:13.659062  586852 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	I0927 00:56:13.661601  586852 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:56:13.664191  586852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:56:13.667349  586852 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:56:13.667906  586852 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:56:13.698213  586852 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:56:13.698329  586852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:56:13.752336  586852 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:56:13.742377428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:56:13.752446  586852 docker.go:318] overlay module found
	I0927 00:56:13.755323  586852 out.go:177] * Using the docker driver based on existing profile
	I0927 00:56:13.758031  586852 start.go:297] selected driver: docker
	I0927 00:56:13.758049  586852 start.go:901] validating driver "docker" against &{Name:functional-506734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-506734 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:56:13.758165  586852 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:56:13.761379  586852 out.go:201] 
	W0927 00:56:13.763997  586852 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 00:56:13.766483  586852 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-506734 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
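A minimal sketch of the two dry-run invocations above: the first requests 250MB and is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), the second omits --memory and validates cleanly without starting anything.
	$ out/minikube-linux-arm64 start -p functional-506734 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio
	# exit status 23: 250MiB is below the usable minimum of 1800MB
	$ out/minikube-linux-arm64 start -p functional-506734 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	# succeeds: configuration is only validated against the existing profile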

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-506734 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-506734 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (184.085871ms)

                                                
                                                
-- stdout --
	* [functional-506734] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 00:56:13.463253  586808 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:56:13.463470  586808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:56:13.463497  586808 out.go:358] Setting ErrFile to fd 2...
	I0927 00:56:13.463516  586808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:56:13.463881  586808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 00:56:13.464284  586808 out.go:352] Setting JSON to false
	I0927 00:56:13.465246  586808 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16717,"bootTime":1727381857,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0927 00:56:13.465341  586808 start.go:139] virtualization:  
	I0927 00:56:13.468285  586808 out.go:177] * [functional-506734] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0927 00:56:13.470632  586808 notify.go:220] Checking for updates...
	I0927 00:56:13.473202  586808 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:56:13.475112  586808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:56:13.477163  586808 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 00:56:13.479343  586808 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	I0927 00:56:13.481631  586808 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 00:56:13.483778  586808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:56:13.486573  586808 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 00:56:13.487143  586808 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:56:13.514239  586808 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 00:56:13.514356  586808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 00:56:13.575602  586808 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-27 00:56:13.563259109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 00:56:13.575714  586808 docker.go:318] overlay module found
	I0927 00:56:13.578297  586808 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0927 00:56:13.580635  586808 start.go:297] selected driver: docker
	I0927 00:56:13.580652  586808 start.go:901] validating driver "docker" against &{Name:functional-506734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-506734 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:56:13.580790  586808 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:56:13.583852  586808 out.go:201] 
	W0927 00:56:13.586460  586808 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0927 00:56:13.589033  586808 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-506734 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-506734 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-f55wg" [e3b1cd7a-2808-4d3b-bdd4-e2876c75996e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-f55wg" [e3b1cd7a-2808-4d3b-bdd4-e2876c75996e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003755203s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30354
functional_test.go:1675: http://192.168.49.2:30354: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-f55wg

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30354
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.68s)
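A minimal sketch of the deploy-and-connect flow exercised here (image and names taken from the log; the NodePort differs per run):
	$ kubectl --context functional-506734 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	$ kubectl --context functional-506734 expose deployment hello-node-connect --type=NodePort --port=8080
	$ out/minikube-linux-arm64 -p functional-506734 service hello-node-connect --url
	# prints e.g. http://192.168.49.2:30354, which the test then fetches over HTTP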

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (23.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [82d11fad-1db4-4a52-9603-b36f97a26026] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003364785s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-506734 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-506734 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-506734 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-506734 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [12f5eb36-a9ad-4002-8e1f-4315769b1d6b] Pending
helpers_test.go:344: "sp-pod" [12f5eb36-a9ad-4002-8e1f-4315769b1d6b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [12f5eb36-a9ad-4002-8e1f-4315769b1d6b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004174001s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-506734 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-506734 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-506734 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ea1130a9-1cd1-40d3-b946-22bc642de7ba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ea1130a9-1cd1-40d3-b946-22bc642de7ba] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004279773s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-506734 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.92s)
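A minimal sketch of the persistence check above: a file written through the claim survives deleting and re-creating the pod (manifests are the testdata files named in the log).
	$ kubectl --context functional-506734 apply -f testdata/storage-provisioner/pvc.yaml
	$ kubectl --context functional-506734 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-506734 exec sp-pod -- touch /tmp/mount/foo
	$ kubectl --context functional-506734 delete -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-506734 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-506734 exec sp-pod -- ls /tmp/mount
	# foo is still listed, so the volume outlived the first pod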

                                                
                                    
TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh -n functional-506734 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 cp functional-506734:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2146268824/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh -n functional-506734 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh -n functional-506734 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.28s)
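A minimal sketch of the copy directions checked above; the host-side destination path here is illustrative, the test itself uses a per-run temp directory.
	$ out/minikube-linux-arm64 -p functional-506734 cp testdata/cp-test.txt /home/docker/cp-test.txt
	$ out/minikube-linux-arm64 -p functional-506734 ssh -n functional-506734 "sudo cat /home/docker/cp-test.txt"
	$ out/minikube-linux-arm64 -p functional-506734 cp functional-506734:/home/docker/cp-test.txt ./cp-test.txt
	# host -> node, node -> host, and copying into a not-yet-existing node path are all verified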

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/559158/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo cat /etc/test/nested/copy/559158/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
TestFunctional/parallel/CertSync (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/559158.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo cat /etc/ssl/certs/559158.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/559158.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo cat /usr/share/ca-certificates/559158.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5591582.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo cat /etc/ssl/certs/5591582.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5591582.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo cat /usr/share/ca-certificates/5591582.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.91s)
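A minimal sketch of the locations checked above: the host certificate named after the test process id (559158 in this run) must appear inside the node under both certificate directories and under its hash name; the same checks are then repeated for the second certificate (5591582.pem / 3ec20f2e.0).
	$ out/minikube-linux-arm64 -p functional-506734 ssh "sudo cat /etc/ssl/certs/559158.pem"
	$ out/minikube-linux-arm64 -p functional-506734 ssh "sudo cat /usr/share/ca-certificates/559158.pem"
	$ out/minikube-linux-arm64 -p functional-506734 ssh "sudo cat /etc/ssl/certs/51391683.0"
	# each command must print the expected certificate contents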

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-506734 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-506734 ssh "sudo systemctl is-active docker": exit status 1 (331.360827ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-506734 ssh "sudo systemctl is-active containerd": exit status 1 (261.336592ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
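A minimal sketch of the runtime check above: with crio as the active runtime, docker and containerd must report inactive, and systemctl's non-zero exit (surfaced through ssh) is exactly what the test expects.
	$ out/minikube-linux-arm64 -p functional-506734 ssh "sudo systemctl is-active docker"
	# prints "inactive"; systemctl exits 3, which ssh reports as a non-zero status
	$ out/minikube-linux-arm64 -p functional-506734 ssh "sudo systemctl is-active containerd"
	# prints "inactive" as well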

                                                
                                    
TestFunctional/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-506734 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-506734 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-506734 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-506734 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 584730: os: process already finished
helpers_test.go:502: unable to terminate pid 584553: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-506734 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-506734 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2121e860-8313-4c09-b4ba-36829736da05] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2121e860-8313-4c09-b4ba-36829736da05] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00482593s
I0927 00:55:52.851999  559158 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-506734 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.68.200 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-506734 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
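Taken together, the TunnelCmd serial steps above amount to the following flow (service manifest and ingress IP are from this run):
	$ out/minikube-linux-arm64 -p functional-506734 tunnel --alsologtostderr &
	$ kubectl --context functional-506734 apply -f testdata/testsvc.yaml
	$ kubectl --context functional-506734 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	# returns the LoadBalancer ingress IP (10.103.68.200 here), reachable directly while the tunnel runs
	# stopping the tunnel process tears the route down again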

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-506734 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-506734 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-kzvfp" [9355a9db-9650-46ed-9e48-f45ed8b0f76b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-kzvfp" [9355a9db-9650-46ed-9e48-f45ed8b0f76b] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003856668s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "356.604386ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "59.607958ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "341.741893ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "58.688868ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdany-port397553990/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727398569121298769" to /tmp/TestFunctionalparallelMountCmdany-port397553990/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727398569121298769" to /tmp/TestFunctionalparallelMountCmdany-port397553990/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727398569121298769" to /tmp/TestFunctionalparallelMountCmdany-port397553990/001/test-1727398569121298769
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.566063ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:56:09.434147  559158 retry.go:31] will retry after 449.758313ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 00:56 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 00:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 00:56 test-1727398569121298769
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh cat /mount-9p/test-1727398569121298769
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-506734 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8b3cc40b-0778-4e3a-839c-c0dceceb74f3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8b3cc40b-0778-4e3a-839c-c0dceceb74f3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8b3cc40b-0778-4e3a-839c-c0dceceb74f3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004548084s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-506734 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdany-port397553990/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.92s)
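A minimal sketch of the 9p mount exercised above; the host directory is a per-run temp dir (written as <mount-dir> here), and the first findmnt probe can race the mount coming up, hence the retry visible in the log.
	$ out/minikube-linux-arm64 mount -p functional-506734 /tmp/<mount-dir>:/mount-9p --alsologtostderr -v=1 &
	$ out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T /mount-9p | grep 9p"
	$ out/minikube-linux-arm64 -p functional-506734 ssh -- ls -la /mount-9p
	# files written on the host show up under /mount-9p inside the node, and vice versa
	$ out/minikube-linux-arm64 -p functional-506734 ssh "sudo umount -f /mount-9p"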

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 service list -o json
functional_test.go:1494: Took "562.42889ms" to run "out/minikube-linux-arm64 -p functional-506734 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32170
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32170
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdspecific-port2851692423/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (369.751596ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:56:17.409467  559158 retry.go:31] will retry after 641.244621ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdspecific-port2851692423/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-506734 ssh "sudo umount -f /mount-9p": exit status 1 (376.53352ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-506734 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdspecific-port2851692423/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.31s)
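For readers reproducing the specific-port check outside the harness, a sketch of the same sequence: start the 9p mount in the background on a fixed port, then retry findmnt over ssh until the guest sees it. The binary path, profile, and port come from this run; the host source directory is an assumed placeholder.

// mountcheck.go: background 9p mount on a fixed port, then retry findmnt
// over ssh, mirroring the retry.go back-off visible in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	const (
		bin     = "out/minikube-linux-arm64"
		profile = "functional-506734"
	)

	// Background mount process, equivalent to the test's "(dbg) daemon:" step.
	mount := exec.Command(bin, "mount", "-p", profile,
		"/tmp/mount-src:/mount-9p", "--port", "46464")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill() // equivalent of the "stopping [...]" step

	// The guest may not expose the mount immediately; retry a few times.
	for i := 0; i < 5; i++ {
		out, err := exec.Command(bin, "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount visible in guest:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never became visible in the guest")
}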

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748858653/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748858653/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748858653/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T" /mount1: exit status 1 (915.765322ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 00:56:20.271674  559158 retry.go:31] will retry after 370.186801ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-506734 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748858653/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748858653/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-506734 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748858653/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.49s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-506734 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-506734
localhost/kicbase/echo-server:functional-506734
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-506734 image ls --format short --alsologtostderr:
I0927 00:56:30.068230  589671 out.go:345] Setting OutFile to fd 1 ...
I0927 00:56:30.068455  589671 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:56:30.068490  589671 out.go:358] Setting ErrFile to fd 2...
I0927 00:56:30.068517  589671 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:56:30.068829  589671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
I0927 00:56:30.069681  589671 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:56:30.070226  589671 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:56:30.070882  589671 cli_runner.go:164] Run: docker container inspect functional-506734 --format={{.State.Status}}
I0927 00:56:30.097605  589671 ssh_runner.go:195] Run: systemctl --version
I0927 00:56:30.097686  589671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-506734
I0927 00:56:30.124458  589671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/functional-506734/id_rsa Username:docker}
I0927 00:56:30.224384  589671 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-506734 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 195245f0c7927 | 197MB  |
| localhost/kicbase/echo-server           | functional-506734  | ce2d2cda2d858 | 4.79MB |
| localhost/minikube-local-cache-test     | functional-506734  | 2bfb377103379 | 3.33kB |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
| docker.io/library/nginx                 | alpine             | b887aca7aed61 | 48.4MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-506734 image ls --format table --alsologtostderr:
I0927 00:56:30.634985  589820 out.go:345] Setting OutFile to fd 1 ...
I0927 00:56:30.635194  589820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:56:30.635225  589820 out.go:358] Setting ErrFile to fd 2...
I0927 00:56:30.635247  589820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:56:30.635537  589820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
I0927 00:56:30.636294  589820 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:56:30.636466  589820 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:56:30.636988  589820 cli_runner.go:164] Run: docker container inspect functional-506734 --format={{.State.Status}}
I0927 00:56:30.658842  589820 ssh_runner.go:195] Run: systemctl --version
I0927 00:56:30.658893  589820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-506734
I0927 00:56:30.696471  589820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/functional-506734/id_rsa Username:docker}
I0927 00:56:30.792546  589820 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-506734 image ls --format json --alsologtostderr:
[{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48375489"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172029"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-506734"],"size":"4788229"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6
ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82
d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67007814"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9
a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-mini
kube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"2bfb377103379bb3f8acac3918ff2c778407e9bae55c2ce0db5571c3a94e2401","repoDigests":["localhost/minikube-local-cache-test@sha256:4e5d600e7794e547ac65261510b780f3925e58150eb991e85d756e018de04c87"],"repoTags":["localhost/minikube-local-cache-test:functional-506734"],"size":"3330"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/k
ube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-506734 image ls --format json --alsologtostderr:
I0927 00:56:30.367184  589737 out.go:345] Setting OutFile to fd 1 ...
I0927 00:56:30.367403  589737 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:56:30.367429  589737 out.go:358] Setting ErrFile to fd 2...
I0927 00:56:30.367448  589737 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:56:30.367732  589737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
I0927 00:56:30.368499  589737 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:56:30.368680  589737 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:56:30.369198  589737 cli_runner.go:164] Run: docker container inspect functional-506734 --format={{.State.Status}}
I0927 00:56:30.397302  589737 ssh_runner.go:195] Run: systemctl --version
I0927 00:56:30.397356  589737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-506734
I0927 00:56:30.427445  589737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/functional-506734/id_rsa Username:docker}
I0927 00:56:30.519149  589737 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
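The JSON emitted by `image ls --format json` above is a flat array of image records. A minimal sketch of decoding it in Go, with struct fields mirroring the keys visible in this run's output; the binary path and profile name are assumptions taken from the same log:

// imagels.go: decode the `image ls --format json` output shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageRecord matches one element of the JSON array printed in this report.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-506734",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}

	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}

	// Print tag -> id, flagging images that carry digests but no tags.
	for _, img := range images {
		if len(img.RepoTags) == 0 {
			fmt.Printf("<untagged> %s\n", img.ID[:13])
			continue
		}
		for _, tag := range img.RepoTags {
			fmt.Printf("%s %s\n", tag, img.ID[:13])
		}
	}
}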

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-506734 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "48375489"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171
repoTags:
- docker.io/library/nginx:latest
size: "197172029"
- id: 2bfb377103379bb3f8acac3918ff2c778407e9bae55c2ce0db5571c3a94e2401
repoDigests:
- localhost/minikube-local-cache-test@sha256:4e5d600e7794e547ac65261510b780f3925e58150eb991e85d756e018de04c87
repoTags:
- localhost/minikube-local-cache-test:functional-506734
size: "3330"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-506734
size: "4788229"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-506734 image ls --format yaml --alsologtostderr:
I0927 00:56:30.051882  589672 out.go:345] Setting OutFile to fd 1 ...
I0927 00:56:30.052077  589672 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:56:30.052088  589672 out.go:358] Setting ErrFile to fd 2...
I0927 00:56:30.052093  589672 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:56:30.052540  589672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
I0927 00:56:30.053357  589672 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:56:30.053536  589672 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:56:30.054148  589672 cli_runner.go:164] Run: docker container inspect functional-506734 --format={{.State.Status}}
I0927 00:56:30.084892  589672 ssh_runner.go:195] Run: systemctl --version
I0927 00:56:30.084954  589672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-506734
I0927 00:56:30.109911  589672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/functional-506734/id_rsa Username:docker}
I0927 00:56:30.207263  589672 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-506734 ssh pgrep buildkitd: exit status 1 (317.902836ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image build -t localhost/my-image:functional-506734 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-506734 image build -t localhost/my-image:functional-506734 testdata/build --alsologtostderr: (3.076339936s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-506734 image build -t localhost/my-image:functional-506734 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f62d7341dee
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-506734
--> e82e757dc4a
Successfully tagged localhost/my-image:functional-506734
e82e757dc4a8043202ef4369653888dab240950a309df6f221590abe576875ca
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-506734 image build -t localhost/my-image:functional-506734 testdata/build --alsologtostderr:
I0927 00:56:30.645828  589825 out.go:345] Setting OutFile to fd 1 ...
I0927 00:56:30.646550  589825 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:56:30.646565  589825 out.go:358] Setting ErrFile to fd 2...
I0927 00:56:30.646572  589825 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 00:56:30.646902  589825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
I0927 00:56:30.647631  589825 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:56:30.649373  589825 config.go:182] Loaded profile config "functional-506734": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 00:56:30.649928  589825 cli_runner.go:164] Run: docker container inspect functional-506734 --format={{.State.Status}}
I0927 00:56:30.679165  589825 ssh_runner.go:195] Run: systemctl --version
I0927 00:56:30.679234  589825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-506734
I0927 00:56:30.700024  589825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33511 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/functional-506734/id_rsa Username:docker}
I0927 00:56:30.792476  589825 build_images.go:161] Building image from path: /tmp/build.486176070.tar
I0927 00:56:30.792537  589825 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0927 00:56:30.802039  589825 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.486176070.tar
I0927 00:56:30.810979  589825 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.486176070.tar: stat -c "%s %y" /var/lib/minikube/build/build.486176070.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.486176070.tar': No such file or directory
I0927 00:56:30.811008  589825 ssh_runner.go:362] scp /tmp/build.486176070.tar --> /var/lib/minikube/build/build.486176070.tar (3072 bytes)
I0927 00:56:30.855478  589825 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.486176070
I0927 00:56:30.864298  589825 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.486176070 -xf /var/lib/minikube/build/build.486176070.tar
I0927 00:56:30.873354  589825 crio.go:315] Building image: /var/lib/minikube/build/build.486176070
I0927 00:56:30.873433  589825 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-506734 /var/lib/minikube/build/build.486176070 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0927 00:56:33.634087  589825 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-506734 /var/lib/minikube/build/build.486176070 --cgroup-manager=cgroupfs: (2.760627296s)
I0927 00:56:33.634151  589825 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.486176070
I0927 00:56:33.642879  589825 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.486176070.tar
I0927 00:56:33.651436  589825 build_images.go:217] Built localhost/my-image:functional-506734 from /tmp/build.486176070.tar
I0927 00:56:33.651482  589825 build_images.go:133] succeeded building to: functional-506734
I0927 00:56:33.651488  589825 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)
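The three build steps above come from a small build context. A sketch that recreates an equivalent context and runs the same `image build` invocation; the Dockerfile body is inferred from the logged STEP lines (not copied from the repository's testdata/build), and the binary path and profile are from this run:

// imagebuild.go: recreate a context equivalent to STEP 1/3..3/3 above and
// build it in the cluster's runtime. With the crio runtime, minikube packs
// the context into a tar, ships it to the node, and drives `podman build`
// there, as the stderr above shows.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// FROM gcr.io/k8s-minikube/busybox / RUN true / ADD content.txt, as logged.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-506734",
		"image", "build", "-t", "localhost/my-image:functional-506734", dir,
		"--alsologtostderr")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}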

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-506734
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image load --daemon kicbase/echo-server:functional-506734 --alsologtostderr
2024/09/27 00:56:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-506734 image load --daemon kicbase/echo-server:functional-506734 --alsologtostderr: (1.203078437s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image load --daemon kicbase/echo-server:functional-506734 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-506734
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image load --daemon kicbase/echo-server:functional-506734 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image save kicbase/echo-server:functional-506734 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image rm kicbase/echo-server:functional-506734 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)
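The last three image tests form a round trip: save the image to a tar on the host, remove it from the node's runtime, then load it back from the tar. A sketch of that sequence as one program; the binary path, profile, and image name come from this run, while the tar path is an assumed placeholder:

// imageroundtrip.go: the save -> rm -> load -> ls sequence exercised by the
// ImageSaveToFile / ImageRemove / ImageLoadFromFile tests above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%v: %v", args, err)
	}
}

func main() {
	const (
		profile = "functional-506734"
		image   = "kicbase/echo-server:functional-506734"
		tarball = "/tmp/echo-server-save.tar" // assumed path
	)

	run("-p", profile, "image", "save", image, tarball) // export from the node
	run("-p", profile, "image", "rm", image)            // drop it from the runtime
	run("-p", profile, "image", "load", tarball)        // re-import from the tar
	run("-p", profile, "image", "ls")                   // confirm it is back
}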

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-506734
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-506734 image save --daemon kicbase/echo-server:functional-506734 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-506734
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-506734
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-506734
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-506734
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (175.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-129707 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0927 00:57:14.422944  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:14.429356  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:14.440747  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:14.462110  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:14.503466  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:14.584829  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:14.746409  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:15.067902  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:15.709960  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:16.991803  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:19.553704  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:24.675209  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:34.917328  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:57:55.399188  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 00:58:36.360526  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-129707 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m54.731817153s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (175.54s)
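A sketch of the HA cluster start and follow-up status check performed above, using the same flags as the logged invocation (three control planes via --ha, docker driver, cri-o runtime); the binary path and profile name are taken from this run:

// hastart.go: start the multi-control-plane cluster and report node status.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		bin     = "out/minikube-linux-arm64"
		profile = "ha-129707"
	)

	start := exec.Command(bin, "start", "-p", profile,
		"--wait=true", "--memory=2200", "--ha", "-v=7", "--alsologtostderr",
		"--driver=docker", "--container-runtime=crio")
	start.Stdout = os.Stdout
	start.Stderr = os.Stderr
	if err := start.Run(); err != nil {
		log.Fatal(err)
	}

	// The status call reports every control-plane and worker node.
	status := exec.Command(bin, "-p", profile, "status", "-v=7", "--alsologtostderr")
	status.Stdout = os.Stdout
	status.Stderr = os.Stderr
	if err := status.Run(); err != nil {
		log.Fatal(err)
	}
}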

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (8.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-129707 -- rollout status deployment/busybox: (5.722294753s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-25npm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-j9xjc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-lxzl8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-25npm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-j9xjc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-lxzl8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-25npm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-j9xjc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-lxzl8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.41s)
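The DeployApp checks above run nslookup inside each busybox replica for three names. A sketch of that loop; pod names are discovered at run time (the -7dff88458-* suffixes vary per run), the kubectl context name comes from this run, and plain kubectl with that context is assumed instead of the minikube kubectl wrapper used by the test:

// dnscheck.go: re-run the per-pod DNS checks from the DeployApp test above.
// Like ha_test.go, it simply takes every pod in the default namespace
// (the busybox replicas created by the rollout).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const context = "ha-129707"

	// Same pod discovery as the test: list pod names via jsonpath.
	out, err := exec.Command("kubectl", "--context", context,
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}

	lookups := []string{
		"kubernetes.io",
		"kubernetes.default",
		"kubernetes.default.svc.cluster.local",
	}

	for _, pod := range strings.Fields(string(out)) {
		for _, name := range lookups {
			// Each replica must be able to resolve all three names.
			if err := exec.Command("kubectl", "--context", context,
				"exec", pod, "--", "nslookup", name).Run(); err != nil {
				log.Fatalf("%s: nslookup %s failed: %v", pod, name, err)
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}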

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-25npm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-25npm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-j9xjc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-j9xjc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-lxzl8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-129707 -- exec busybox-7dff88458-lxzl8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (32.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-129707 -v=7 --alsologtostderr
E0927 00:59:58.282553  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-129707 -v=7 --alsologtostderr: (31.86185978s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-129707 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (17.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp testdata/cp-test.txt ha-129707:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2433342164/001/cp-test_ha-129707.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707:/home/docker/cp-test.txt ha-129707-m02:/home/docker/cp-test_ha-129707_ha-129707-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m02 "sudo cat /home/docker/cp-test_ha-129707_ha-129707-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707:/home/docker/cp-test.txt ha-129707-m03:/home/docker/cp-test_ha-129707_ha-129707-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m03 "sudo cat /home/docker/cp-test_ha-129707_ha-129707-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707:/home/docker/cp-test.txt ha-129707-m04:/home/docker/cp-test_ha-129707_ha-129707-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m04 "sudo cat /home/docker/cp-test_ha-129707_ha-129707-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp testdata/cp-test.txt ha-129707-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2433342164/001/cp-test_ha-129707-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m02:/home/docker/cp-test.txt ha-129707:/home/docker/cp-test_ha-129707-m02_ha-129707.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707 "sudo cat /home/docker/cp-test_ha-129707-m02_ha-129707.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m02:/home/docker/cp-test.txt ha-129707-m03:/home/docker/cp-test_ha-129707-m02_ha-129707-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m03 "sudo cat /home/docker/cp-test_ha-129707-m02_ha-129707-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m02:/home/docker/cp-test.txt ha-129707-m04:/home/docker/cp-test_ha-129707-m02_ha-129707-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m04 "sudo cat /home/docker/cp-test_ha-129707-m02_ha-129707-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp testdata/cp-test.txt ha-129707-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2433342164/001/cp-test_ha-129707-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m03:/home/docker/cp-test.txt ha-129707:/home/docker/cp-test_ha-129707-m03_ha-129707.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707 "sudo cat /home/docker/cp-test_ha-129707-m03_ha-129707.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m03:/home/docker/cp-test.txt ha-129707-m02:/home/docker/cp-test_ha-129707-m03_ha-129707-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m02 "sudo cat /home/docker/cp-test_ha-129707-m03_ha-129707-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m03:/home/docker/cp-test.txt ha-129707-m04:/home/docker/cp-test_ha-129707-m03_ha-129707-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m04 "sudo cat /home/docker/cp-test_ha-129707-m03_ha-129707-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp testdata/cp-test.txt ha-129707-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2433342164/001/cp-test_ha-129707-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m04:/home/docker/cp-test.txt ha-129707:/home/docker/cp-test_ha-129707-m04_ha-129707.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707 "sudo cat /home/docker/cp-test_ha-129707-m04_ha-129707.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m04:/home/docker/cp-test.txt ha-129707-m02:/home/docker/cp-test_ha-129707-m04_ha-129707-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m02 "sudo cat /home/docker/cp-test_ha-129707-m04_ha-129707-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 cp ha-129707-m04:/home/docker/cp-test.txt ha-129707-m03:/home/docker/cp-test_ha-129707-m04_ha-129707-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m03 "sudo cat /home/docker/cp-test_ha-129707-m04_ha-129707-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.81s)
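
The long sequence above exercises "minikube cp" in every direction (host to node, node to host, node to node) and reads each copy back over SSH. A condensed sketch of the pattern, using this run's profile and node names (the local destination path is arbitrary):
  # host -> primary control plane
  $ out/minikube-linux-arm64 -p ha-129707 cp testdata/cp-test.txt ha-129707:/home/docker/cp-test.txt
  # node -> local host
  $ out/minikube-linux-arm64 -p ha-129707 cp ha-129707:/home/docker/cp-test.txt /tmp/cp-test_ha-129707.txt
  # node -> another node, then confirm the copy over SSH
  $ out/minikube-linux-arm64 -p ha-129707 cp ha-129707:/home/docker/cp-test.txt ha-129707-m02:/home/docker/cp-test_ha-129707_ha-129707-m02.txt
  $ out/minikube-linux-arm64 -p ha-129707 ssh -n ha-129707-m02 "sudo cat /home/docker/cp-test_ha-129707_ha-129707-m02.txt"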

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 node stop m02 -v=7 --alsologtostderr
E0927 01:00:43.385997  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:00:43.392467  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:00:43.403897  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:00:43.425339  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:00:43.466710  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:00:43.548134  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:00:43.709629  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:00:44.030996  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:00:44.673007  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-129707 node stop m02 -v=7 --alsologtostderr: (11.988928801s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr
E0927 01:00:45.955238  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr: exit status 7 (698.488207ms)

                                                
                                                
-- stdout --
	ha-129707
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-129707-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-129707-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-129707-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:00:45.870098  605516 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:00:45.870291  605516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:00:45.870305  605516 out.go:358] Setting ErrFile to fd 2...
	I0927 01:00:45.870312  605516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:00:45.870591  605516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 01:00:45.870852  605516 out.go:352] Setting JSON to false
	I0927 01:00:45.870891  605516 mustload.go:65] Loading cluster: ha-129707
	I0927 01:00:45.870981  605516 notify.go:220] Checking for updates...
	I0927 01:00:45.871371  605516 config.go:182] Loaded profile config "ha-129707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:00:45.871394  605516 status.go:174] checking status of ha-129707 ...
	I0927 01:00:45.872298  605516 cli_runner.go:164] Run: docker container inspect ha-129707 --format={{.State.Status}}
	I0927 01:00:45.895116  605516 status.go:364] ha-129707 host status = "Running" (err=<nil>)
	I0927 01:00:45.895152  605516 host.go:66] Checking if "ha-129707" exists ...
	I0927 01:00:45.895541  605516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-129707
	I0927 01:00:45.924236  605516 host.go:66] Checking if "ha-129707" exists ...
	I0927 01:00:45.924544  605516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 01:00:45.924585  605516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-129707
	I0927 01:00:45.944405  605516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33516 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/ha-129707/id_rsa Username:docker}
	I0927 01:00:46.039813  605516 ssh_runner.go:195] Run: systemctl --version
	I0927 01:00:46.044040  605516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:00:46.055176  605516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 01:00:46.107211  605516 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-27 01:00:46.095904834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 01:00:46.107806  605516 kubeconfig.go:125] found "ha-129707" server: "https://192.168.49.254:8443"
	I0927 01:00:46.107837  605516 api_server.go:166] Checking apiserver status ...
	I0927 01:00:46.107885  605516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:00:46.119039  605516 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1409/cgroup
	I0927 01:00:46.129866  605516 api_server.go:182] apiserver freezer: "11:freezer:/docker/4b40b7bb59aa50f33dc672c905f0201cdd5b52747789509b91696708284c59b1/crio/crio-701fe24aab33bd94fda53659950c5a8e208fd329b2567bdee6b163d7f9b24f39"
	I0927 01:00:46.129962  605516 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4b40b7bb59aa50f33dc672c905f0201cdd5b52747789509b91696708284c59b1/crio/crio-701fe24aab33bd94fda53659950c5a8e208fd329b2567bdee6b163d7f9b24f39/freezer.state
	I0927 01:00:46.138907  605516 api_server.go:204] freezer state: "THAWED"
	I0927 01:00:46.138935  605516 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 01:00:46.147167  605516 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 01:00:46.147197  605516 status.go:456] ha-129707 apiserver status = Running (err=<nil>)
	I0927 01:00:46.147212  605516 status.go:176] ha-129707 status: &{Name:ha-129707 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:00:46.147229  605516 status.go:174] checking status of ha-129707-m02 ...
	I0927 01:00:46.147544  605516 cli_runner.go:164] Run: docker container inspect ha-129707-m02 --format={{.State.Status}}
	I0927 01:00:46.164705  605516 status.go:364] ha-129707-m02 host status = "Stopped" (err=<nil>)
	I0927 01:00:46.164731  605516 status.go:377] host is not running, skipping remaining checks
	I0927 01:00:46.164739  605516 status.go:176] ha-129707-m02 status: &{Name:ha-129707-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:00:46.164759  605516 status.go:174] checking status of ha-129707-m03 ...
	I0927 01:00:46.165133  605516 cli_runner.go:164] Run: docker container inspect ha-129707-m03 --format={{.State.Status}}
	I0927 01:00:46.181665  605516 status.go:364] ha-129707-m03 host status = "Running" (err=<nil>)
	I0927 01:00:46.181705  605516 host.go:66] Checking if "ha-129707-m03" exists ...
	I0927 01:00:46.182089  605516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-129707-m03
	I0927 01:00:46.199007  605516 host.go:66] Checking if "ha-129707-m03" exists ...
	I0927 01:00:46.199651  605516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 01:00:46.200301  605516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-129707-m03
	I0927 01:00:46.217210  605516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33526 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/ha-129707-m03/id_rsa Username:docker}
	I0927 01:00:46.308126  605516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:00:46.319975  605516 kubeconfig.go:125] found "ha-129707" server: "https://192.168.49.254:8443"
	I0927 01:00:46.320007  605516 api_server.go:166] Checking apiserver status ...
	I0927 01:00:46.320048  605516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:00:46.330789  605516 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1349/cgroup
	I0927 01:00:46.340461  605516 api_server.go:182] apiserver freezer: "11:freezer:/docker/3779fe1579263066b1bf6ecddc01e2b1eea8c182ba513926838aa944eaebde3f/crio/crio-e42fd340eecabc6d8a1f656e005d7063be8945a46ebceca7be7e7aa6e9041438"
	I0927 01:00:46.340566  605516 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3779fe1579263066b1bf6ecddc01e2b1eea8c182ba513926838aa944eaebde3f/crio/crio-e42fd340eecabc6d8a1f656e005d7063be8945a46ebceca7be7e7aa6e9041438/freezer.state
	I0927 01:00:46.349721  605516 api_server.go:204] freezer state: "THAWED"
	I0927 01:00:46.349760  605516 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0927 01:00:46.357541  605516 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0927 01:00:46.357588  605516 status.go:456] ha-129707-m03 apiserver status = Running (err=<nil>)
	I0927 01:00:46.357599  605516 status.go:176] ha-129707-m03 status: &{Name:ha-129707-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:00:46.357633  605516 status.go:174] checking status of ha-129707-m04 ...
	I0927 01:00:46.357956  605516 cli_runner.go:164] Run: docker container inspect ha-129707-m04 --format={{.State.Status}}
	I0927 01:00:46.375521  605516 status.go:364] ha-129707-m04 host status = "Running" (err=<nil>)
	I0927 01:00:46.375547  605516 host.go:66] Checking if "ha-129707-m04" exists ...
	I0927 01:00:46.375865  605516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-129707-m04
	I0927 01:00:46.392920  605516 host.go:66] Checking if "ha-129707-m04" exists ...
	I0927 01:00:46.393230  605516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 01:00:46.393277  605516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-129707-m04
	I0927 01:00:46.410663  605516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33531 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/ha-129707-m04/id_rsa Username:docker}
	I0927 01:00:46.499826  605516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:00:46.511818  605516 status.go:176] ha-129707-m04 status: &{Name:ha-129707-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.69s)
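
The non-zero exit above is expected: with m02 stopped, "status" still reports the remaining nodes as Running but exits with code 7 (as seen in this run), which is how the test tells a degraded cluster from a healthy one. A manual sketch with the same profile:
  $ out/minikube-linux-arm64 -p ha-129707 node stop m02 -v=7 --alsologtostderr
  $ out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr
  $ echo $?   # 7 in this run, because one control-plane node is stopped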

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (23.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 node start m02 -v=7 --alsologtostderr
E0927 01:00:48.516795  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:00:53.638923  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:01:03.880896  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-129707 node start m02 -v=7 --alsologtostderr: (21.235790463s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr: (1.635912769s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (23.01s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.378613745s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (207.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-129707 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-129707 -v=7 --alsologtostderr
E0927 01:01:24.363156  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-129707 -v=7 --alsologtostderr: (37.117267748s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-129707 --wait=true -v=7 --alsologtostderr
E0927 01:02:05.324925  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:02:14.422868  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:02:42.124549  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:03:27.246825  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-129707 --wait=true -v=7 --alsologtostderr: (2m49.812335237s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-129707
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (207.09s)
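
This test stops the whole cluster and starts it again with --wait=true, then checks that the node list still contains the same machines. A sketch of the same flow, assuming this run's profile:
  $ out/minikube-linux-arm64 node list -p ha-129707            # record the nodes before the restart
  $ out/minikube-linux-arm64 stop -p ha-129707 -v=7 --alsologtostderr
  $ out/minikube-linux-arm64 start -p ha-129707 --wait=true -v=7 --alsologtostderr
  $ out/minikube-linux-arm64 node list -p ha-129707            # should list the same nodes as before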

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-129707 node delete m03 -v=7 --alsologtostderr: (11.364063182s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.23s)
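
After removing m03 the test re-checks the node list and, with the go-template query logged above, renders each remaining node's Ready condition. The basic manual steps, assuming kubectl is pointed at the ha-129707 context:
  $ out/minikube-linux-arm64 -p ha-129707 node delete m03 -v=7 --alsologtostderr
  $ kubectl get nodes   # m03 should be gone; the test additionally prints one "True" per Ready node via the go-template shown above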

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-129707 stop -v=7 --alsologtostderr: (35.705414875s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr: exit status 7 (99.364937ms)

                                                
                                                
-- stdout --
	ha-129707
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-129707-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-129707-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:05:27.439082  620052 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:05:27.439316  620052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:05:27.439344  620052 out.go:358] Setting ErrFile to fd 2...
	I0927 01:05:27.439363  620052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:05:27.439658  620052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 01:05:27.439881  620052 out.go:352] Setting JSON to false
	I0927 01:05:27.439951  620052 mustload.go:65] Loading cluster: ha-129707
	I0927 01:05:27.440046  620052 notify.go:220] Checking for updates...
	I0927 01:05:27.440471  620052 config.go:182] Loaded profile config "ha-129707": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:05:27.440512  620052 status.go:174] checking status of ha-129707 ...
	I0927 01:05:27.441388  620052 cli_runner.go:164] Run: docker container inspect ha-129707 --format={{.State.Status}}
	I0927 01:05:27.459421  620052 status.go:364] ha-129707 host status = "Stopped" (err=<nil>)
	I0927 01:05:27.459441  620052 status.go:377] host is not running, skipping remaining checks
	I0927 01:05:27.459448  620052 status.go:176] ha-129707 status: &{Name:ha-129707 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:05:27.459487  620052 status.go:174] checking status of ha-129707-m02 ...
	I0927 01:05:27.459791  620052 cli_runner.go:164] Run: docker container inspect ha-129707-m02 --format={{.State.Status}}
	I0927 01:05:27.476534  620052 status.go:364] ha-129707-m02 host status = "Stopped" (err=<nil>)
	I0927 01:05:27.476553  620052 status.go:377] host is not running, skipping remaining checks
	I0927 01:05:27.476560  620052 status.go:176] ha-129707-m02 status: &{Name:ha-129707-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:05:27.476580  620052 status.go:174] checking status of ha-129707-m04 ...
	I0927 01:05:27.476896  620052 cli_runner.go:164] Run: docker container inspect ha-129707-m04 --format={{.State.Status}}
	I0927 01:05:27.492287  620052 status.go:364] ha-129707-m04 host status = "Stopped" (err=<nil>)
	I0927 01:05:27.492312  620052 status.go:377] host is not running, skipping remaining checks
	I0927 01:05:27.492320  620052 status.go:176] ha-129707-m04 status: &{Name:ha-129707-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (117.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-129707 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0927 01:05:43.385587  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:06:11.088505  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:07:14.423764  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-129707 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m57.078639866s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (117.99s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (70.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-129707 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-129707 --control-plane -v=7 --alsologtostderr: (1m9.535192324s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.51s)
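
Adding a node with --control-plane (rather than the default worker role) is what restores the three-member HA control plane here. A manual sketch with this run's profile:
  $ out/minikube-linux-arm64 node add -p ha-129707 --control-plane -v=7 --alsologtostderr
  $ out/minikube-linux-arm64 -p ha-129707 status -v=7 --alsologtostderr   # the new node should show "type: Control Plane"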

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

                                                
                                    
TestJSONOutput/start/Command (80.39s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-910541 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-910541 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m20.390658114s)
--- PASS: TestJSONOutput/start/Command (80.39s)
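
With --output=json, every progress step is emitted as a one-line JSON event (specversion, type and data fields, as visible in the TestErrorJSONOutput output further below) instead of human-readable text; the Audit and CurrentSteps subtests that follow parse that stream. The invocation used in this run:
  $ out/minikube-linux-arm64 start -p json-output-910541 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=crio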

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-910541 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-910541 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-910541 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-910541 --output=json --user=testUser: (5.871593701s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-032579 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-032579 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.751779ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"860916b3-96fe-49ad-bd97-9646dba3a718","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-032579] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa650a9e-084b-47da-9553-0e59ea3d1688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"da86ea12-65eb-42aa-9012-3cb3c58f80fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1205d2c0-ce54-4c4d-bb4a-e3bc3e1c4149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig"}}
	{"specversion":"1.0","id":"eca1c3a5-b44c-4af2-abc4-d3c07fe68d20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube"}}
	{"specversion":"1.0","id":"6dabd8f9-c3b8-4c1c-a95b-ff6ebd0d2293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"71a5461d-9c16-4e95-b3d9-bbb7a2994bd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"29bbaebc-0549-4a55-bf57-3ece5d74fb4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-032579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-032579
--- PASS: TestErrorJSONOutput (0.20s)
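
The failure path produces the same event stream, ending in an io.k8s.sigs.minikube.error event that carries the exit code (56) and the reason (DRV_UNSUPPORTED_OS). One hypothetical way to pull the message out of that stream, assuming jq is available (this post-processing is not part of the test itself):
  $ out/minikube-linux-arm64 start -p json-output-error-032579 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  The driver 'fail' is not supported on linux/arm64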

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.63s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-169925 --network=
E0927 01:10:43.385599  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-169925 --network=: (36.60008624s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-169925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-169925
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-169925: (2.012901124s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.63s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.42s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-725662 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-725662 --network=bridge: (32.404972633s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-725662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-725662
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-725662: (1.990468833s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.42s)

                                                
                                    
TestKicExistingNetwork (32s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0927 01:11:30.327872  559158 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0927 01:11:30.342362  559158 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0927 01:11:30.343356  559158 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0927 01:11:30.343398  559158 cli_runner.go:164] Run: docker network inspect existing-network
W0927 01:11:30.359564  559158 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0927 01:11:30.359597  559158 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0927 01:11:30.359617  559158 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0927 01:11:30.359734  559158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0927 01:11:30.378892  559158 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc95616b5e30 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:3d:b8:64:40} reservation:<nil>}
I0927 01:11:30.379348  559158 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001cf3240}
I0927 01:11:30.379385  559158 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0927 01:11:30.379442  559158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0927 01:11:30.447511  559158 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-539175 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-539175 --network=existing-network: (29.898165213s)
helpers_test.go:175: Cleaning up "existing-network-539175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-539175
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-539175: (1.944865558s)
I0927 01:12:02.307115  559158 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.00s)
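
Here the bridge network is created ahead of time and minikube is told to join it instead of creating its own. A simplified sketch of what the log above shows (the test's setup additionally passes MTU and label options to docker, as logged):
  # pre-create the network; this run picked the free 192.168.58.0/24 range
  $ docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
  # start a profile attached to that pre-existing network
  $ out/minikube-linux-arm64 start -p existing-network-539175 --network=existing-network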

                                                
                                    
TestKicCustomSubnet (32.64s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-328539 --subnet=192.168.60.0/24
E0927 01:12:14.422770  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-328539 --subnet=192.168.60.0/24: (30.522227053s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-328539 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-328539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-328539
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-328539: (2.097354775s)
--- PASS: TestKicCustomSubnet (32.64s)

                                                
                                    
x
+
TestKicStaticIP (33.26s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-556014 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-556014 --static-ip=192.168.200.200: (31.011259646s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-556014 ip
helpers_test.go:175: Cleaning up "static-ip-556014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-556014
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-556014: (2.097249494s)
--- PASS: TestKicStaticIP (33.26s)
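TestKicCustomSubnet and TestKicStaticIP above follow the same pattern: start a profile with a networking flag (--subnet=192.168.60.0/24 or --static-ip=192.168.200.200), then read the result back via docker network inspect or minikube ip. A minimal verification sketch in Go, assuming the profile name and Go template string from the log and that the docker CLI is on PATH; the expected value check is illustrative.

// verify_kic_network.go - sketch of the verification step used by the KIC
// network tests: read the subnet back from the docker network created for a
// profile, using the same Go template the test passes to --format.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-328539", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	subnet := strings.TrimSpace(string(out))
	if subnet == "192.168.60.0/24" {
		fmt.Println("subnet matches the --subnet flag:", subnet)
	} else {
		fmt.Println("unexpected subnet:", subnet)
	}
}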

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (65.65s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-738876 --driver=docker  --container-runtime=crio
E0927 01:13:37.486386  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-738876 --driver=docker  --container-runtime=crio: (31.564626338s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-741649 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-741649 --driver=docker  --container-runtime=crio: (28.960221891s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-738876
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-741649
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-741649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-741649
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-741649: (1.936853097s)
helpers_test.go:175: Cleaning up "first-738876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-738876
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-738876: (1.901411744s)
--- PASS: TestMinikubeProfile (65.65s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.56s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-555564 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-555564 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.561004654s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.56s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-555564 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)
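The two TestMountStart steps above start a driver-only profile with a host mount (--mount --mount-port 46464 and related flags) and then confirm the mount by listing /minikube-host over SSH. A sketch of the same check, assuming the profile name and mount point from the log; counting entries as a pass signal is an illustrative simplification.

// verify_mount.go - sketch of the mount verification used above: run `ls` on
// the mount point inside the node over `minikube ssh`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-555564",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		fmt.Println("ssh ls failed:", err)
		return
	}
	entries := strings.Fields(string(out))
	fmt.Printf("/minikube-host contains %d entries\n", len(entries))
}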

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-557361 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-557361 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.4705059s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.47s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-557361 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.6s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-555564 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-555564 --alsologtostderr -v=5: (1.603426309s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-557361 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-557361
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-557361: (1.205444422s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.6s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-557361
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-557361: (6.599619311s)
--- PASS: TestMountStart/serial/RestartStopped (7.60s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-557361 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (135.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-568851 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0927 01:15:43.385574  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-568851 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m15.22071703s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (135.69s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-568851 -- rollout status deployment/busybox: (4.493777798s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- exec busybox-7dff88458-86ldk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- exec busybox-7dff88458-tjbdb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- exec busybox-7dff88458-86ldk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- exec busybox-7dff88458-tjbdb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- exec busybox-7dff88458-86ldk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- exec busybox-7dff88458-tjbdb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.37s)
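TestMultiNode/serial/DeployApp2Nodes applies a two-replica busybox deployment, waits for the rollout, and then resolves in-cluster DNS names from each pod. The sketch below mirrors that wait-then-exec loop via `minikube kubectl --`, discovering pod names at run time instead of hard-coding the hashed suffixes; the profile name and commands come from the log, the busybox- prefix filter and error handling are assumptions.

// deploy_dns_check.go - sketch of the rollout-then-nslookup flow from
// DeployApp2Nodes, driven through `minikube kubectl --`.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl command against the multinode profile via
// `minikube kubectl --`, which proxies to a kubectl matching the cluster.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"kubectl", "-p", "multinode-568851", "--"}, args...)
	return exec.Command("out/minikube-linux-arm64", full...).CombinedOutput()
}

func main() {
	// Wait for the busybox deployment rollout, as the test does.
	if out, err := kubectl("rollout", "status", "deployment/busybox"); err != nil {
		log.Fatalf("rollout: %v\n%s", err, out)
	}
	// Discover the pod names instead of hard-coding them.
	out, err := kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	if err != nil {
		log.Fatalf("get pods: %v\n%s", err, out)
	}
	for _, pod := range strings.Fields(string(out)) {
		if !strings.HasPrefix(pod, "busybox-") {
			continue
		}
		// Resolve the in-cluster service name from inside each pod.
		res, resErr := kubectl("exec", pod, "--", "nslookup", "kubernetes.default")
		fmt.Printf("%s: err=%v\n%s\n", pod, resErr, res)
	}
}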

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- exec busybox-7dff88458-86ldk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- exec busybox-7dff88458-86ldk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- exec busybox-7dff88458-tjbdb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-568851 -- exec busybox-7dff88458-tjbdb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (58.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-568851 -v 3 --alsologtostderr
E0927 01:17:06.450874  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:17:14.422860  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-568851 -v 3 --alsologtostderr: (57.692981071s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.35s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-568851 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp testdata/cp-test.txt multinode-568851:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp multinode-568851:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4071147796/001/cp-test_multinode-568851.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp multinode-568851:/home/docker/cp-test.txt multinode-568851-m02:/home/docker/cp-test_multinode-568851_multinode-568851-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m02 "sudo cat /home/docker/cp-test_multinode-568851_multinode-568851-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp multinode-568851:/home/docker/cp-test.txt multinode-568851-m03:/home/docker/cp-test_multinode-568851_multinode-568851-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m03 "sudo cat /home/docker/cp-test_multinode-568851_multinode-568851-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp testdata/cp-test.txt multinode-568851-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp multinode-568851-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4071147796/001/cp-test_multinode-568851-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp multinode-568851-m02:/home/docker/cp-test.txt multinode-568851:/home/docker/cp-test_multinode-568851-m02_multinode-568851.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851 "sudo cat /home/docker/cp-test_multinode-568851-m02_multinode-568851.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp multinode-568851-m02:/home/docker/cp-test.txt multinode-568851-m03:/home/docker/cp-test_multinode-568851-m02_multinode-568851-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m03 "sudo cat /home/docker/cp-test_multinode-568851-m02_multinode-568851-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp testdata/cp-test.txt multinode-568851-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp multinode-568851-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4071147796/001/cp-test_multinode-568851-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp multinode-568851-m03:/home/docker/cp-test.txt multinode-568851:/home/docker/cp-test_multinode-568851-m03_multinode-568851.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851 "sudo cat /home/docker/cp-test_multinode-568851-m03_multinode-568851.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 cp multinode-568851-m03:/home/docker/cp-test.txt multinode-568851-m02:/home/docker/cp-test_multinode-568851-m03_multinode-568851-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 ssh -n multinode-568851-m02 "sudo cat /home/docker/cp-test_multinode-568851-m03_multinode-568851-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.40s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-568851 node stop m03: (1.206792269s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-568851 status: exit status 7 (482.704459ms)

                                                
                                                
-- stdout --
	multinode-568851
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-568851-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-568851-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-568851 status --alsologtostderr: exit status 7 (482.607665ms)

                                                
                                                
-- stdout --
	multinode-568851
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-568851-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-568851-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:18:13.382136  673624 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:18:13.382344  673624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:18:13.382357  673624 out.go:358] Setting ErrFile to fd 2...
	I0927 01:18:13.382362  673624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:18:13.382672  673624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 01:18:13.382947  673624 out.go:352] Setting JSON to false
	I0927 01:18:13.382993  673624 mustload.go:65] Loading cluster: multinode-568851
	I0927 01:18:13.383034  673624 notify.go:220] Checking for updates...
	I0927 01:18:13.383485  673624 config.go:182] Loaded profile config "multinode-568851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:18:13.383795  673624 status.go:174] checking status of multinode-568851 ...
	I0927 01:18:13.385267  673624 cli_runner.go:164] Run: docker container inspect multinode-568851 --format={{.State.Status}}
	I0927 01:18:13.405672  673624 status.go:364] multinode-568851 host status = "Running" (err=<nil>)
	I0927 01:18:13.405707  673624 host.go:66] Checking if "multinode-568851" exists ...
	I0927 01:18:13.406018  673624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-568851
	I0927 01:18:13.422243  673624 host.go:66] Checking if "multinode-568851" exists ...
	I0927 01:18:13.422560  673624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 01:18:13.422623  673624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-568851
	I0927 01:18:13.448468  673624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33636 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/multinode-568851/id_rsa Username:docker}
	I0927 01:18:13.540690  673624 ssh_runner.go:195] Run: systemctl --version
	I0927 01:18:13.544899  673624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:18:13.556442  673624 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 01:18:13.615129  673624 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-27 01:18:13.604622498 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 01:18:13.615709  673624 kubeconfig.go:125] found "multinode-568851" server: "https://192.168.67.2:8443"
	I0927 01:18:13.615742  673624 api_server.go:166] Checking apiserver status ...
	I0927 01:18:13.615790  673624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 01:18:13.626904  673624 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup
	I0927 01:18:13.637148  673624 api_server.go:182] apiserver freezer: "11:freezer:/docker/f6f4a03b82ab2bb24b960f1c142c704509ee2bdd8f92d18818b46cdf01e01068/crio/crio-caeae109a303e6b09b5816165cb98c44e6184347b51b06fe9bd9ec4711d5df6a"
	I0927 01:18:13.637221  673624 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f6f4a03b82ab2bb24b960f1c142c704509ee2bdd8f92d18818b46cdf01e01068/crio/crio-caeae109a303e6b09b5816165cb98c44e6184347b51b06fe9bd9ec4711d5df6a/freezer.state
	I0927 01:18:13.646002  673624 api_server.go:204] freezer state: "THAWED"
	I0927 01:18:13.646029  673624 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0927 01:18:13.653580  673624 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0927 01:18:13.653607  673624 status.go:456] multinode-568851 apiserver status = Running (err=<nil>)
	I0927 01:18:13.653619  673624 status.go:176] multinode-568851 status: &{Name:multinode-568851 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:18:13.653639  673624 status.go:174] checking status of multinode-568851-m02 ...
	I0927 01:18:13.653953  673624 cli_runner.go:164] Run: docker container inspect multinode-568851-m02 --format={{.State.Status}}
	I0927 01:18:13.669978  673624 status.go:364] multinode-568851-m02 host status = "Running" (err=<nil>)
	I0927 01:18:13.670021  673624 host.go:66] Checking if "multinode-568851-m02" exists ...
	I0927 01:18:13.670320  673624 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-568851-m02
	I0927 01:18:13.686200  673624 host.go:66] Checking if "multinode-568851-m02" exists ...
	I0927 01:18:13.686511  673624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 01:18:13.686562  673624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-568851-m02
	I0927 01:18:13.703426  673624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33641 SSHKeyPath:/home/jenkins/minikube-integration/19711-553751/.minikube/machines/multinode-568851-m02/id_rsa Username:docker}
	I0927 01:18:13.791899  673624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 01:18:13.803468  673624 status.go:176] multinode-568851-m02 status: &{Name:multinode-568851-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:18:13.803503  673624 status.go:174] checking status of multinode-568851-m03 ...
	I0927 01:18:13.803829  673624 cli_runner.go:164] Run: docker container inspect multinode-568851-m03 --format={{.State.Status}}
	I0927 01:18:13.819410  673624 status.go:364] multinode-568851-m03 host status = "Stopped" (err=<nil>)
	I0927 01:18:13.819434  673624 status.go:377] host is not running, skipping remaining checks
	I0927 01:18:13.819442  673624 status.go:176] multinode-568851-m03 status: &{Name:multinode-568851-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
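Note that `minikube status` in the block above exits with status 7 rather than 0 once any node is stopped, so scripted checks need to look at the exit code as well as the printed text. A short sketch of capturing that code, assuming the profile name from the log; recovering the code via *exec.ExitError is standard Go, not a minikube API.

// status_exit_code.go - sketch showing how to capture the non-zero exit code
// (exit status 7 above) that `minikube status` returns when a node is stopped.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-568851", "status").Output()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// 7 is what the log above shows when a host is stopped.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run status:", err)
	}
}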

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-568851 node start m03 -v=7 --alsologtostderr: (8.887832666s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.60s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (116.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-568851
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-568851
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-568851: (24.774867406s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-568851 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-568851 --wait=true -v=8 --alsologtostderr: (1m31.285793465s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-568851
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.18s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-568851 node delete m03: (4.710782788s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.35s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 stop
E0927 01:20:43.386935  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-568851 stop: (23.657343714s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-568851 status: exit status 7 (94.772994ms)

                                                
                                                
-- stdout --
	multinode-568851
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-568851-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-568851 status --alsologtostderr: exit status 7 (91.401989ms)

                                                
                                                
-- stdout --
	multinode-568851
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-568851-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:20:48.751827  681412 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:20:48.752022  681412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:20:48.752052  681412 out.go:358] Setting ErrFile to fd 2...
	I0927 01:20:48.752073  681412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:20:48.752329  681412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 01:20:48.752538  681412 out.go:352] Setting JSON to false
	I0927 01:20:48.752595  681412 mustload.go:65] Loading cluster: multinode-568851
	I0927 01:20:48.752681  681412 notify.go:220] Checking for updates...
	I0927 01:20:48.753059  681412 config.go:182] Loaded profile config "multinode-568851": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:20:48.753109  681412 status.go:174] checking status of multinode-568851 ...
	I0927 01:20:48.753683  681412 cli_runner.go:164] Run: docker container inspect multinode-568851 --format={{.State.Status}}
	I0927 01:20:48.772722  681412 status.go:364] multinode-568851 host status = "Stopped" (err=<nil>)
	I0927 01:20:48.772742  681412 status.go:377] host is not running, skipping remaining checks
	I0927 01:20:48.772750  681412 status.go:176] multinode-568851 status: &{Name:multinode-568851 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 01:20:48.772773  681412 status.go:174] checking status of multinode-568851-m02 ...
	I0927 01:20:48.773071  681412 cli_runner.go:164] Run: docker container inspect multinode-568851-m02 --format={{.State.Status}}
	I0927 01:20:48.796612  681412 status.go:364] multinode-568851-m02 host status = "Stopped" (err=<nil>)
	I0927 01:20:48.796630  681412 status.go:377] host is not running, skipping remaining checks
	I0927 01:20:48.796644  681412 status.go:176] multinode-568851-m02 status: &{Name:multinode-568851-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-568851 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-568851 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.66174962s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-568851 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.31s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (32.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-568851
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-568851-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-568851-m02 --driver=docker  --container-runtime=crio: exit status 14 (105.492894ms)

                                                
                                                
-- stdout --
	* [multinode-568851-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-568851-m02' is duplicated with machine name 'multinode-568851-m02' in profile 'multinode-568851'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-568851-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-568851-m03 --driver=docker  --container-runtime=crio: (30.129498175s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-568851
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-568851: exit status 80 (309.271873ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-568851 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-568851-m03 already exists in multinode-568851-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-568851-m03
E0927 01:22:14.422891  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-568851-m03: (1.916272853s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.51s)

                                                
                                    
x
+
TestPreload (127.6s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-185995 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-185995 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m34.328888841s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-185995 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-185995 image pull gcr.io/k8s-minikube/busybox: (3.423135433s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-185995
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-185995: (5.799607054s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-185995 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-185995 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.432577351s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-185995 image list
helpers_test.go:175: Cleaning up "test-preload-185995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-185995
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-185995: (2.356057412s)
--- PASS: TestPreload (127.60s)

                                                
                                    
x
+
TestScheduledStopUnix (104.53s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-397985 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-397985 --memory=2048 --driver=docker  --container-runtime=crio: (28.795466964s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-397985 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-397985 -n scheduled-stop-397985
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-397985 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0927 01:24:56.491542  559158 retry.go:31] will retry after 74.562µs: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.492694  559158 retry.go:31] will retry after 149.657µs: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.493833  559158 retry.go:31] will retry after 195.221µs: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.494928  559158 retry.go:31] will retry after 385.985µs: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.496049  559158 retry.go:31] will retry after 481.439µs: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.497166  559158 retry.go:31] will retry after 413.914µs: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.498287  559158 retry.go:31] will retry after 879.308µs: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.499421  559158 retry.go:31] will retry after 2.325464ms: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.502665  559158 retry.go:31] will retry after 2.575664ms: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.505895  559158 retry.go:31] will retry after 2.755783ms: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.509104  559158 retry.go:31] will retry after 7.092474ms: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.517338  559158 retry.go:31] will retry after 8.408929ms: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.526823  559158 retry.go:31] will retry after 11.652303ms: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.539053  559158 retry.go:31] will retry after 27.578368ms: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
I0927 01:24:56.567293  559158 retry.go:31] will retry after 36.925552ms: open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/scheduled-stop-397985/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-397985 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-397985 -n scheduled-stop-397985
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-397985
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-397985 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0927 01:25:43.385991  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-397985
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-397985: exit status 7 (68.397828ms)

                                                
                                                
-- stdout --
	scheduled-stop-397985
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-397985 -n scheduled-stop-397985
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-397985 -n scheduled-stop-397985: exit status 7 (65.100856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-397985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-397985
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-397985: (4.255593837s)
--- PASS: TestScheduledStopUnix (104.53s)
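TestScheduledStopUnix exercises `minikube stop --schedule`, `--cancel-scheduled`, and the eventual transition to Stopped. The flags and profile name below are the ones from the log; the fixed sleep is an illustrative stand-in for the test's retry loop, not how the test itself waits.

// scheduled_stop.go - sketch of the schedule / cancel / re-schedule sequence
// from TestScheduledStopUnix. Flags and the profile name come from the log;
// the fixed sleep replaces the test's polling of the profile state.
package main

import (
	"log"
	"os/exec"
	"time"
)

func mk(args ...string) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		log.Printf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	mk("stop", "-p", "scheduled-stop-397985", "--schedule", "5m")   // arm a stop 5 minutes out
	mk("stop", "-p", "scheduled-stop-397985", "--cancel-scheduled") // cancel it again
	mk("stop", "-p", "scheduled-stop-397985", "--schedule", "15s")  // re-arm with a short delay
	time.Sleep(30 * time.Second)                                    // give the scheduled stop time to fire
	mk("status", "-p", "scheduled-stop-397985")                     // should now report Stopped (exit code 7)
}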

                                                
                                    
x
+
TestInsufficientStorage (10.14s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-094463 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-094463 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.695977247s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ec079491-bd67-41c0-8909-c17dd38aba43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-094463] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"24b7f11b-6f93-4d59-944a-68e506f3d28f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"7da1e805-ddf1-4f19-94c7-8a11d25c7489","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"843d036e-88ab-43f7-bfed-03b68132498a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig"}}
	{"specversion":"1.0","id":"d0075712-6c05-430e-8bfa-f676ddea2356","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube"}}
	{"specversion":"1.0","id":"bc75bd12-4b6a-4d19-816b-ea620d17cad7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2f38dfb2-1542-40c6-8b75-dea614f91a7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"388d1ecc-87f6-4ad4-99e3-b92b7b287ec0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e5e00b4f-1dcb-4b97-82e0-37c4b691730d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"801fb3b5-893c-4acf-97df-1c634cf80b74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"36b31dbc-9d32-4810-bdd7-57034ebab4ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"aec6ab31-4177-447e-9cc4-ea32a943e052","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-094463\" primary control-plane node in \"insufficient-storage-094463\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8c00b79-c918-4abb-9a16-33fbe43fc7ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cab5cc3f-a51c-48dd-99d5-b48fb0f8016c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8fb5df7-c9e8-4dd6-ba02-546f2650fab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-094463 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-094463 --output=json --layout=cluster: exit status 7 (279.069465ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-094463","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-094463","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:26:19.704107  699059 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-094463" does not appear in /home/jenkins/minikube-integration/19711-553751/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-094463 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-094463 --output=json --layout=cluster: exit status 7 (270.665635ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-094463","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-094463","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 01:26:19.977516  699122 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-094463" does not appear in /home/jenkins/minikube-integration/19711-553751/kubeconfig
	E0927 01:26:19.987540  699122 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/insufficient-storage-094463/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-094463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-094463
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-094463: (1.890602814s)
--- PASS: TestInsufficientStorage (10.14s)
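
The "--output=json --layout=cluster" payload shown above has a small, regular shape (a top-level StatusCode/StatusName pair plus per-component codes), so it is easy to consume from a script. The Go sketch below decodes only the fields visible in the output above and flags the 507 / InsufficientStorage case; it is an illustration written for this report, not part of the minikube test suite, and the profile name is simply reused from the run above.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// clusterStatus mirrors only the top-level fields visible in the
// "minikube status --output=json --layout=cluster" output above.
type clusterStatus struct {
	Name         string `json:"Name"`
	StatusCode   int    `json:"StatusCode"`
	StatusName   string `json:"StatusName"`
	StatusDetail string `json:"StatusDetail"`
}

func main() {
	// Profile name reused from the run above. minikube exits non-zero (7)
	// in the InsufficientStorage case, so the error from Output() is
	// expected; the JSON payload is still written to stdout.
	out, _ := exec.Command("minikube", "status", "-p", "insufficient-storage-094463",
		"--output=json", "--layout=cluster").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	if st.StatusCode == 507 { // InsufficientStorage, as reported in the log above
		fmt.Printf("%s: %s (%s)\n", st.Name, st.StatusName, st.StatusDetail)
	}
}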

                                                
                                    
TestRunningBinaryUpgrade (69.44s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4152217353 start -p running-upgrade-412716 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0927 01:30:43.385906  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4152217353 start -p running-upgrade-412716 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.219384056s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-412716 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-412716 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.644099865s)
helpers_test.go:175: Cleaning up "running-upgrade-412716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-412716
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-412716: (2.778717898s)
--- PASS: TestRunningBinaryUpgrade (69.44s)

                                                
                                    
TestKubernetesUpgrade (391.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-191611 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-191611 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m16.013983482s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-191611
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-191611: (1.516560181s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-191611 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-191611 status --format={{.Host}}: exit status 7 (111.029284ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-191611 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-191611 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.719424585s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-191611 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-191611 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-191611 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (117.64887ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-191611] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-191611
	    minikube start -p kubernetes-upgrade-191611 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1916112 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-191611 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-191611 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0927 01:33:46.453276  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-191611 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.551729964s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-191611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-191611
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-191611: (2.975198637s)
--- PASS: TestKubernetesUpgrade (391.14s)
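
The sequence that passes above (start at v1.20.0, stop, confirm the host reports Stopped, then start again at v1.31.1) is the supported upgrade path; only the in-place downgrade attempt is rejected with exit status 106 and the recovery suggestions quoted in the stderr block. A minimal Go sketch of the same flow, driven through os/exec, follows; it reuses the profile name from this run purely for illustration and is not the test's actual implementation.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the local minikube binary and stops on the first failure.
func run(args ...string) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
}

func main() {
	const profile = "kubernetes-upgrade-191611" // profile name from the run above

	// 1. Bring the cluster up on the old Kubernetes version.
	run("start", "-p", profile, "--memory=2200",
		"--kubernetes-version=v1.20.0", "--driver=docker", "--container-runtime=crio")

	// 2. Stop it. At this point "minikube status --format={{.Host}}" reports
	//    Stopped with exit status 7, as seen in the log.
	run("stop", "-p", profile)

	// 3. Start again on the newer version; the existing cluster is upgraded in place.
	run("start", "-p", profile, "--memory=2200",
		"--kubernetes-version=v1.31.1", "--driver=docker", "--container-runtime=crio")
}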

                                                
                                    
TestMissingContainerUpgrade (163.34s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2437865830 start -p missing-upgrade-300487 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2437865830 start -p missing-upgrade-300487 --memory=2200 --driver=docker  --container-runtime=crio: (1m24.977554948s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-300487
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-300487: (10.424768771s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-300487
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-300487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-300487 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.607133751s)
helpers_test.go:175: Cleaning up "missing-upgrade-300487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-300487
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-300487: (2.342371128s)
--- PASS: TestMissingContainerUpgrade (163.34s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-350660 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-350660 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (79.249858ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-350660] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (36.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-350660 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-350660 --driver=docker  --container-runtime=crio: (36.393854047s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-350660 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.88s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-350660 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-350660 --no-kubernetes --driver=docker  --container-runtime=crio: (5.592428974s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-350660 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-350660 status -o json: exit status 2 (285.210421ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-350660","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-350660
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-350660: (1.861846422s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.74s)

                                                
                                    
TestNoKubernetes/serial/Start (8.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-350660 --no-kubernetes --driver=docker  --container-runtime=crio
E0927 01:27:14.422879  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-350660 --no-kubernetes --driver=docker  --container-runtime=crio: (8.296257571s)
--- PASS: TestNoKubernetes/serial/Start (8.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-350660 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-350660 "sudo systemctl is-active --quiet service kubelet": exit status 1 (428.871141ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.43s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (1.083514522s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.67s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-350660
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-350660: (1.244332604s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-350660 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-350660 --driver=docker  --container-runtime=crio: (7.453061127s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.45s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-350660 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-350660 "sudo systemctl is-active --quiet service kubelet": exit status 1 (305.791786ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.94s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (80.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4153147743 start -p stopped-upgrade-592124 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4153147743 start -p stopped-upgrade-592124 --memory=2200 --vm-driver=docker  --container-runtime=crio: (33.52181182s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4153147743 -p stopped-upgrade-592124 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4153147743 -p stopped-upgrade-592124 stop: (2.750229356s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-592124 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0927 01:30:17.488415  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-592124 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.246431708s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (80.52s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-592124
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-592124: (1.088103301s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                    
TestPause/serial/Start (77.93s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-294049 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0927 01:32:14.423071  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-294049 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m17.925579781s)
--- PASS: TestPause/serial/Start (77.93s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (17.5s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-294049 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-294049 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.454525101s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (17.50s)

                                                
                                    
TestPause/serial/Pause (1.23s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-294049 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-294049 --alsologtostderr -v=5: (1.232568716s)
--- PASS: TestPause/serial/Pause (1.23s)

                                                
                                    
TestPause/serial/VerifyStatus (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-294049 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-294049 --output=json --layout=cluster: exit status 2 (452.101397ms)

                                                
                                                
-- stdout --
	{"Name":"pause-294049","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-294049","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)

                                                
                                    
TestPause/serial/Unpause (1.24s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-294049 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-294049 --alsologtostderr -v=5: (1.241474021s)
--- PASS: TestPause/serial/Unpause (1.24s)

                                                
                                    
TestPause/serial/PauseAgain (1.43s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-294049 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-294049 --alsologtostderr -v=5: (1.431569283s)
--- PASS: TestPause/serial/PauseAgain (1.43s)

                                                
                                    
TestPause/serial/DeletePaused (2.95s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-294049 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-294049 --alsologtostderr -v=5: (2.952915267s)
--- PASS: TestPause/serial/DeletePaused (2.95s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.46s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-294049
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-294049: exit status 1 (15.136802ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-294049: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)

                                                
                                    
TestNetworkPlugins/group/false (5.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-075073 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-075073 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (311.757904ms)

                                                
                                                
-- stdout --
	* [false-075073] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 01:34:03.559742  739892 out.go:345] Setting OutFile to fd 1 ...
	I0927 01:34:03.559879  739892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:34:03.559885  739892 out.go:358] Setting ErrFile to fd 2...
	I0927 01:34:03.559889  739892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 01:34:03.560125  739892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-553751/.minikube/bin
	I0927 01:34:03.560566  739892 out.go:352] Setting JSON to false
	I0927 01:34:03.561405  739892 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18987,"bootTime":1727381857,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0927 01:34:03.561472  739892 start.go:139] virtualization:  
	I0927 01:34:03.587494  739892 out.go:177] * [false-075073] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0927 01:34:03.605826  739892 notify.go:220] Checking for updates...
	I0927 01:34:03.620787  739892 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 01:34:03.630129  739892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 01:34:03.646386  739892 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-553751/kubeconfig
	I0927 01:34:03.655632  739892 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-553751/.minikube
	I0927 01:34:03.663781  739892 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0927 01:34:03.675305  739892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 01:34:03.685255  739892 config.go:182] Loaded profile config "force-systemd-env-980399": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 01:34:03.685373  739892 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 01:34:03.705756  739892 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0927 01:34:03.705887  739892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0927 01:34:03.777193  739892 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2024-09-27 01:34:03.760312204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0927 01:34:03.777308  739892 docker.go:318] overlay module found
	I0927 01:34:03.782767  739892 out.go:177] * Using the docker driver based on user configuration
	I0927 01:34:03.791642  739892 start.go:297] selected driver: docker
	I0927 01:34:03.791670  739892 start.go:901] validating driver "docker" against <nil>
	I0927 01:34:03.791686  739892 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 01:34:03.800495  739892 out.go:201] 
	W0927 01:34:03.808674  739892 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0927 01:34:03.813312  739892 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-075073 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-075073" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-075073

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-075073"

                                                
                                                
----------------------- debugLogs end: false-075073 [took: 4.839224254s] --------------------------------
helpers_test.go:175: Cleaning up "false-075073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-075073
--- PASS: TestNetworkPlugins/group/false (5.33s)
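
The exit status 14 above is the expected usage guard: with --container-runtime=crio, minikube refuses --cni=false because CRI-O needs a CNI plugin to be configured. A short, hedged Go sketch of an invocation that satisfies the guard follows; the profile name is hypothetical and the choice of the bridge CNI is an assumption (any CNI value minikube accepts would do), so treat it as an illustration rather than a recommended configuration.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// "cni-demo" is a hypothetical profile name. Selecting an explicit CNI
	// (bridge, as an assumption) instead of --cni=false satisfies the
	// "crio requires CNI" check that failed above.
	cmd := exec.Command("minikube", "start", "-p", "cni-demo",
		"--memory=2048", "--cni=bridge",
		"--driver=docker", "--container-runtime=crio")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}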

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (164.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-745133 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0927 01:35:43.385521  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:37:14.423030  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-745133 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m44.639088241s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-745133 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [38b5bc27-22ff-4921-888d-91b89fd8decd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [38b5bc27-22ff-4921-888d-91b89fd8decd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.005610069s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-745133 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.92s)
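Note: the deploy step creates a single busybox pod from the repo's testdata/busybox.yaml, waits up to 8m0s for it to become Running/Ready, then runs `ulimit -n` inside it over kubectl exec. A rough manual equivalent (a sketch; `kubectl wait` stands in for the harness's label-based polling):
    kubectl --context old-k8s-version-745133 create -f testdata/busybox.yaml
    # wait for the pod named "busybox" to report Ready, matching the 8m budget above
    kubectl --context old-k8s-version-745133 wait --for=condition=Ready pod/busybox --timeout=8m0s
    kubectl --context old-k8s-version-745133 exec busybox -- /bin/sh -c "ulimit -n"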

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-874305 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-874305 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m11.698890568s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.70s)
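Note: the no-preload variant passes --preload=false, which tells minikube not to use the preloaded image tarball for v1.31.1, so the container runtime has to pull each component image at start time instead of having it extracted from the tarball. Rough manual equivalent (sketch, local minikube binary assumed):
    minikube start -p no-preload-874305 --memory=2200 --wait=true --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1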

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-745133 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-745133 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.142576725s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-745133 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.38s)
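Note: this step enables the metrics-server addon on the live cluster but overrides its image to registry.k8s.io/echoserver:1.4 and its registry to the unreachable fake.domain, and the follow-up describe of deploy/metrics-server is what gets inspected, so the assertion appears to be about the override landing in the Deployment spec rather than about metrics actually being served. A sketch of the same check by hand:
    minikube addons enable metrics-server -p old-k8s-version-745133 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # confirm the overridden registry/image shows up in the deployment description
    kubectl --context old-k8s-version-745133 describe deploy/metrics-server -n kube-system | grep fake.domain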

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (13.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-745133 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-745133 --alsologtostderr -v=3: (13.480287617s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-745133 -n old-k8s-version-745133
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-745133 -n old-k8s-version-745133: exit status 7 (92.774439ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-745133 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.28s)
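Note: minikube status reports component state both on stdout ("Stopped" here) and through a bit-encoded non-zero exit code (7 when host, kubelet and apiserver are all down), which is the expected result immediately after a stop, hence the harness's "(may be ok)" remark; the step then confirms that an addon can still be enabled against the stopped profile. A sketch of the same sequence:
    # status exits non-zero while the profile is stopped; that alone is not a failure
    minikube status --format={{.Host}} -p old-k8s-version-745133 || echo "status exit=$? (non-zero expected while stopped)"
    minikube addons enable dashboard -p old-k8s-version-745133 --images=MetricsScraper=registry.k8s.io/echoserver:1.4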

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-874305 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [692cfc08-b971-44a3-9561-a37d3cb54652] Pending
helpers_test.go:344: "busybox" [692cfc08-b971-44a3-9561-a37d3cb54652] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [692cfc08-b971-44a3-9561-a37d3cb54652] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003326645s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-874305 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.44s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-874305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-874305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028102163s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-874305 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-874305 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-874305 --alsologtostderr -v=3: (12.033068726s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-874305 -n no-preload-874305
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-874305 -n no-preload-874305: exit status 7 (68.198833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-874305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (289.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-874305 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0927 01:40:43.386300  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:42:14.422799  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-874305 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m49.35884138s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-874305 -n no-preload-874305
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.74s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w85cj" [6121d637-40fa-4c7e-9cf6-80f0520e0767] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00426643s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w85cj" [6121d637-40fa-4c7e-9cf6-80f0520e0767] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005557055s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-874305 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-874305 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)
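Note: the image check lists everything cached in the profile via `image list --format=json` and reports anything outside the expected Kubernetes/minikube set; here that is the kindnet CNI image and the busybox image deployed earlier in the group. The same listing can be produced by hand (sketch):
    minikube -p no-preload-874305 image list --format=json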

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-874305 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-874305 --alsologtostderr -v=1: (1.007682816s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-874305 -n no-preload-874305
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-874305 -n no-preload-874305: exit status 2 (439.83803ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-874305 -n no-preload-874305
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-874305 -n no-preload-874305: exit status 2 (460.199645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-874305 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-874305 -n no-preload-874305
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-874305 -n no-preload-874305
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.48s)
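Note: the Pause step follows a fixed pattern: pause the profile, confirm via minikube status that the apiserver reports Paused and the kubelet reports Stopped (both of these status calls intentionally exit non-zero, hence the "(may be ok)" remarks), then unpause and run both status checks again, which succeed in the log above. Roughly, by hand:
    minikube pause -p no-preload-874305
    minikube status --format={{.APIServer}} -p no-preload-874305   # prints "Paused", exits non-zero
    minikube status --format={{.Kubelet}} -p no-preload-874305     # prints "Stopped", exits non-zero
    minikube unpause -p no-preload-874305
    minikube status --format={{.APIServer}} -p no-preload-874305   # exits 0 once resumed
    minikube status --format={{.Kubelet}} -p no-preload-874305     # exits 0 once resumed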

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-msqw2" [ae45a01d-201b-44a5-97c2-7946dc5f37cd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004866477s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (94.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-310971 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-310971 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m34.469548251s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.47s)
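Note: --embed-certs makes minikube inline the client certificate and key into the generated kubeconfig entry (client-certificate-data / client-key-data) rather than pointing at the profile's .crt/.key files on disk. One way to spot-check that after the start (a sketch; the jsonpath assumes the kubeconfig user entry is named after the profile, which is minikube's default):
    minikube start -p embed-certs-310971 --memory=2200 --wait=true --embed-certs \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1
    # a non-empty result means the cert data is embedded instead of referenced by path
    kubectl config view --raw -o jsonpath='{.users[?(@.name=="embed-certs-310971")].user.client-certificate-data}' | head -c 20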

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-msqw2" [ae45a01d-201b-44a5-97c2-7946dc5f37cd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004074913s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-745133 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-745133 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-745133 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-745133 -n old-k8s-version-745133
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-745133 -n old-k8s-version-745133: exit status 2 (291.912324ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-745133 -n old-k8s-version-745133
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-745133 -n old-k8s-version-745133: exit status 2 (307.666436ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-745133 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-745133 -n old-k8s-version-745133
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-745133 -n old-k8s-version-745133
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-502801 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0927 01:45:43.386362  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-502801 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m25.089291502s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.09s)
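Note: this group runs the apiserver on port 8444 instead of the default 8443 via --apiserver-port=8444; the rest of the flag set matches the other groups. Rough manual equivalent, plus one way to confirm the non-default port is what the node exposes (a sketch; it assumes the kicbase container is named after the profile, as it normally is with the docker driver):
    minikube start -p default-k8s-diff-port-502801 --memory=2200 --wait=true --apiserver-port=8444 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1
    # show the host port mapping published for container port 8444
    docker port default-k8s-diff-port-502801 8444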

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-310971 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [72c3f511-e9a7-478f-b92a-f7219bae62db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [72c3f511-e9a7-478f-b92a-f7219bae62db] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004368585s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-310971 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-502801 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6a8e411a-5332-4ad4-9073-01dbaa749524] Pending
helpers_test.go:344: "busybox" [6a8e411a-5332-4ad4-9073-01dbaa749524] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6a8e411a-5332-4ad4-9073-01dbaa749524] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.00326452s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-502801 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-310971 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-310971 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-310971 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-310971 --alsologtostderr -v=3: (11.978606349s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-502801 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-502801 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-502801 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-502801 --alsologtostderr -v=3: (11.955411245s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-310971 -n embed-certs-310971
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-310971 -n embed-certs-310971: exit status 7 (65.75179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-310971 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (268.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-310971 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0927 01:46:57.489696  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-310971 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m28.433478281s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-310971 -n embed-certs-310971
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.79s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-502801 -n default-k8s-diff-port-502801
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-502801 -n default-k8s-diff-port-502801: exit status 7 (82.218902ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-502801 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-502801 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0927 01:47:14.423677  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:03.116273  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:03.122626  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:03.134052  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:03.155520  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:03.196995  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:03.278351  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:03.439962  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:03.761548  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:04.403347  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:05.685396  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:08.246748  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:13.368232  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:23.609971  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:48:44.092332  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:24.839376  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:24.845977  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:24.857439  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:24.878898  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:24.920282  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:25.001770  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:25.054166  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:25.163613  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:25.485214  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:26.126569  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:27.407939  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:29.970143  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:35.091856  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:49:45.333378  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:50:05.814865  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:50:26.455385  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:50:43.386204  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:50:46.777220  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:50:46.975564  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-502801 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (5m3.716652166s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-502801 -n default-k8s-diff-port-502801
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (304.20s)
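Note: the E0927 ... cert_rotation.go:171 lines interleaved with this start appear to come from the shared test binary's client-go certificate-rotation watcher, which still references client certificates of profiles torn down earlier in the run (functional-506734, addons-220192, old-k8s-version-745133, no-preload-874305); they are stderr noise and do not affect this test's passing result. When scanning a raw log for real failures it can help to filter them out (sketch; test.log is a placeholder filename):
    grep -v 'cert_rotation.go:171' test.log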

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hh5zl" [4d6a9203-0cfc-44cd-9cd1-c70e4f01dfdc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004848409s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hh5zl" [4d6a9203-0cfc-44cd-9cd1-c70e4f01dfdc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004336402s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-310971 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-310971 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-310971 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-310971 -n embed-certs-310971
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-310971 -n embed-certs-310971: exit status 2 (310.721196ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-310971 -n embed-certs-310971
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-310971 -n embed-certs-310971: exit status 2 (317.680986ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-310971 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-310971 -n embed-certs-310971
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-310971 -n embed-certs-310971
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.97s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (34.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-423438 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-423438 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (34.41548481s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.42s)
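Note: this group starts with --network-plugin=cni and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 and only waits for apiserver, system_pods and default_sa; the harness treats cni mode as needing extra setup before pods can schedule, which is why DeployApp, UserAppExistsAfterStop and AddonExistsAfterStop further down in this group pass as 0.00s no-ops with matching WARNING lines. Rough manual equivalent of the start (sketch, local minikube binary assumed):
    minikube start -p newest-cni-423438 --memory=2200 --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1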

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hlscf" [3e91fdd4-336e-4d8d-b6be-3b59238ef97b] Running
E0927 01:52:08.698701  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005414171s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hlscf" [3e91fdd4-336e-4d8d-b6be-3b59238ef97b] Running
E0927 01:52:14.423229  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004254354s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-502801 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-423438 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-423438 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.095066855s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-502801 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-423438 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-423438 --alsologtostderr -v=3: (1.260079121s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-502801 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-502801 --alsologtostderr -v=1: (1.140266095s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-502801 -n default-k8s-diff-port-502801
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-502801 -n default-k8s-diff-port-502801: exit status 2 (381.646034ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-502801 -n default-k8s-diff-port-502801
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-502801 -n default-k8s-diff-port-502801: exit status 2 (368.036394ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-502801 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-502801 --alsologtostderr -v=1: (1.003714483s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-502801 -n default-k8s-diff-port-502801
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-502801 -n default-k8s-diff-port-502801
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-423438 -n newest-cni-423438
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-423438 -n newest-cni-423438: exit status 7 (81.037302ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-423438 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (19.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-423438 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-423438 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (18.701360054s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-423438 -n newest-cni-423438
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (57.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (57.895249082s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.90s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-423438 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-423438 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-423438 --alsologtostderr -v=1: (1.077051063s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-423438 -n newest-cni-423438
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-423438 -n newest-cni-423438: exit status 2 (367.326286ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-423438 -n newest-cni-423438
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-423438 -n newest-cni-423438: exit status 2 (422.393295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-423438 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-423438 --alsologtostderr -v=1: (1.00775617s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-423438 -n newest-cni-423438
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-423438 -n newest-cni-423438
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.91s)
E0927 01:57:58.463129  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:03.116206  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:21.848975  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:21.855382  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:21.866764  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:21.888115  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:21.929596  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:22.011057  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:22.172576  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:22.494701  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:23.136537  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:24.417868  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:26.979134  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:32.100779  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:58:42.342272  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/auto-075073/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (81.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0927 01:53:03.116900  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m21.128825501s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-075073 "pgrep -a kubelet"
I0927 01:53:21.541339  559158 config.go:182] Loaded profile config "auto-075073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-075073 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bjbz7" [1c0e1c1f-37a5-47c3-a496-e403a21837db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bjbz7" [1c0e1c1f-37a5-47c3-a496-e403a21837db] Running
E0927 01:53:30.817248  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/old-k8s-version-745133/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004043877s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-075073 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (63.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.470037s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6r7tw" [3dc6da19-da43-4ce6-ba1a-a71aefd8448f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004523119s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-075073 "pgrep -a kubelet"
I0927 01:54:11.354779  559158 config.go:182] Loaded profile config "kindnet-075073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-075073 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6bzrx" [71985e26-26f4-40cf-8bea-a57684e46d01] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6bzrx" [71985e26-26f4-40cf-8bea-a57684e46d01] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003682999s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-075073 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (59.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0927 01:54:52.540174  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/no-preload-874305/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (59.430732007s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-f8ft2" [50bb54a2-ce52-441c-a912-0db2ba658b23] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007445321s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-075073 "pgrep -a kubelet"
I0927 01:55:00.881658  559158 config.go:182] Loaded profile config "calico-075073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (13.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-075073 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-97fr5" [2f8d1ea1-ba0b-420f-b907-3170894e75d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-97fr5" [2f8d1ea1-ba0b-420f-b907-3170894e75d5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004810058s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-075073 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (79.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0927 01:55:43.385860  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/functional-506734/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m19.496920969s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.50s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-075073 "pgrep -a kubelet"
I0927 01:55:47.888846  559158 config.go:182] Loaded profile config "custom-flannel-075073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-075073 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-54vlc" [a56e8cce-a4fd-4214-ab49-5fbfcd8c2a65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-54vlc" [a56e8cce-a4fd-4214-ab49-5fbfcd8c2a65] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.006132079s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-075073 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (42.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0927 01:56:36.522929  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:36.529258  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:36.540594  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:36.561902  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:36.603251  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:36.684609  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:36.846789  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:37.168455  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:37.810627  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:39.091954  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:41.653298  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:46.775184  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:56:57.017374  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (42.840659631s)
--- PASS: TestNetworkPlugins/group/flannel/Start (42.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-075073 "pgrep -a kubelet"
I0927 01:56:58.598424  559158 config.go:182] Loaded profile config "enable-default-cni-075073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-075073 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8z6t7" [fedaadfa-d3ee-412b-8607-c9d984dcb026] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8z6t7" [fedaadfa-d3ee-412b-8607-c9d984dcb026] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004197832s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cj9tw" [baeae984-1372-4401-a80c-9ce348dc36a4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004240773s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-075073 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-075073 "pgrep -a kubelet"
I0927 01:57:12.535457  559158 config.go:182] Loaded profile config "flannel-075073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-075073 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r9fzv" [5eac59ec-ff64-4eca-b34d-8ee845d8dfa3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0927 01:57:14.423666  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/addons-220192/client.crt: no such file or directory" logger="UnhandledError"
E0927 01:57:17.501743  559158 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19711-553751/.minikube/profiles/default-k8s-diff-port-502801/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-r9fzv" [5eac59ec-ff64-4eca-b34d-8ee845d8dfa3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00463005s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-075073 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (74.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-075073 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m14.032485481s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-075073 "pgrep -a kubelet"
I0927 01:58:44.966277  559158 config.go:182] Loaded profile config "bridge-075073": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-075073 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s7wwf" [51f05bc1-59dc-4985-9ca6-7352c2fec85b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s7wwf" [51f05bc1-59dc-4985-9ca6-7352c2fec85b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003206143s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-075073 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-075073 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (29/327)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.53s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-575684 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-575684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-575684
--- SKIP: TestDownloadOnlyKic (0.53s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-085329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-085329
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-075073 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-075073" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-075073

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-075073"

                                                
                                                
----------------------- debugLogs end: kubenet-075073 [took: 4.536830582s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-075073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-075073
--- SKIP: TestNetworkPlugins/group/kubenet (4.72s)
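
The skip at net_test.go:93 above is a container-runtime gate: kubenet brings no CNI plugins of its own, while a CRI-O based cluster needs a CNI configuration for pod networking, so the whole group is skipped for this runtime. The debug log is still collected, but since no kubenet-075073 profile was ever started, every probe fails with a missing context or missing profile. A minimal sketch of such a gate, assuming the runtime is passed in explicitly; usesCNI and its runtime list are hypothetical, not the real net_test.go helper:

	package net_test

	import "testing"

	// usesCNI reports whether a runtime relies on an external CNI
	// configuration for pod networking (assumed list for this sketch).
	func usesCNI(runtime string) bool {
		return runtime == "crio" || runtime == "containerd"
	}

	// maybeSkipKubenet skips the kubenet group for CNI-dependent runtimes,
	// mirroring the skip recorded at net_test.go:93 above.
	func maybeSkipKubenet(t *testing.T, runtime string) {
		t.Helper()
		if usesCNI(runtime) {
			t.Skipf("Skipping the test as %s container runtime requires CNI", runtime)
		}
	}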

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-075073 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-075073" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-075073

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-075073" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-075073"

                                                
                                                
----------------------- debugLogs end: cilium-075073 [took: 4.807096211s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-075073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-075073
--- SKIP: TestNetworkPlugins/group/cilium (4.99s)
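
Both debugLogs sections above have the same shape: for each probe the harness runs either a kubectl command against the profile's context or a minikube ssh command into the profile's node and prints whatever comes back, errors included. Because neither the kubenet-075073 nor the cilium-075073 profile was ever started, kubectl answers with "context was not found" / "does not exist" and minikube answers with "Profile ... not found". A rough sketch of that collection loop, with an illustrative two-probe list rather than the suite's real helper:

	package net_test

	import (
		"fmt"
		"os/exec"
	)

	// dumpDebugLogs runs a fixed list of probes against a profile and prints
	// their combined output, mirroring the debugLogs dumps above. The probe
	// list here is illustrative only.
	func dumpDebugLogs(profile string) {
		probes := []struct {
			title string
			cmd   *exec.Cmd
		}{
			{"netcat: nslookup kubernetes.default",
				exec.Command("kubectl", "--context", profile, "exec", "deploy/netcat", "--", "nslookup", "kubernetes.default")},
			{"host: /etc/resolv.conf",
				exec.Command("minikube", "ssh", "-p", profile, "--", "cat", "/etc/resolv.conf")},
		}
		for _, p := range probes {
			out, err := p.cmd.CombinedOutput()
			fmt.Printf(">>> %s:\n%s", p.title, out)
			if err != nil {
				// With no running profile, kubectl reports a missing context and
				// minikube a missing profile, which is what fills the dumps above.
				fmt.Println(err)
			}
		}
	}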

                                                
                                    