Test Report: Docker_Linux_crio_arm64 19664

b0eadc949d6b6708e1f550519f8385f72d7fe4f5:2024-09-19:36285

Failed tests (4/328)

| Order | Failed test                                 | Duration (s) |
|-------|---------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                | 75.1         |
| 34    | TestAddons/parallel/Ingress                 | 151.27       |
| 36    | TestAddons/parallel/MetricsServer           | 327.48       |
| 174   | TestMultiControlPlane/serial/RestartCluster | 128.91       |
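For reference, a single failed subtest can usually be re-run in isolation with Go's -run filter. A minimal sketch, assuming minikube's upstream test/integration layout (the path is inferred from the addons_test.go/helpers_test.go references below); the integration harness may require additional flags not shown here:

    # re-run only the Registry subtest from the repo root (go test flags are standard)
    go test ./test/integration -run "TestAddons/parallel/Registry" -timeout 30m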
TestAddons/parallel/Registry (75.1s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 8.279304ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I0919 18:51:21.766670  292666 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 18:51:21.766702  292666 kapi.go:107] duration metric: took 11.739272ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "registry-66c9cd494c-zjfvp" [95228612-f951-44f9-ac40-a54760497790] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003518349s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mn6mx" [384cadea-3e7f-4b57-8edb-f51b9f4dde24] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003280951s
addons_test.go:342: (dbg) Run:  kubectl --context addons-971880 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-971880 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-971880 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.132195624s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-971880 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
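The wget probe above timed out after 60s even though both registry pods reported healthy. As a diagnostic sketch (the context, namespace, image, and URL are copied from the failing command; the kubectl subcommands and the BusyBox wget -T timeout flag are standard), the Service and its endpoints could be inspected and the probe retried with a longer timeout:

    # does the registry Service exist and does it have ready endpoints?
    kubectl --context addons-971880 -n kube-system get svc registry -o wide
    kubectl --context addons-971880 -n kube-system get endpoints registry
    # retry the probe with an explicit 30s network timeout
    kubectl --context addons-971880 run registry-retest --rm --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S -T 30 http://registry.kube-system.svc.cluster.local"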
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 ip
2024/09/19 18:52:33 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-971880
helpers_test.go:235: (dbg) docker inspect addons-971880:

-- stdout --
	[
	    {
	        "Id": "656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057",
	        "Created": "2024-09-19T18:40:21.693648884Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294019,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T18:40:21.83370316Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/hostname",
	        "HostsPath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/hosts",
	        "LogPath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057-json.log",
	        "Name": "/addons-971880",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-971880:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-971880",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997-init/diff:/var/lib/docker/overlay2/01d9e9e08c815432b8994f686c30467e8ad0d2e87cf6790233377a53c691e8f4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-971880",
	                "Source": "/var/lib/docker/volumes/addons-971880/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-971880",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-971880",
	                "name.minikube.sigs.k8s.io": "addons-971880",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8401fb271cde0fae79ea1c883e095a5f34d887cc56bfc81485e9925601a92a9a",
	            "SandboxKey": "/var/run/docker/netns/8401fb271cde",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-971880": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d62f700a78daed261ed14f4bb32a66890d0b280b5d5a72af727d194426d28141",
	                    "EndpointID": "e792600fa39aac0b873f2e9aacc195668339c4f184c5b304571be40ad512fdb9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-971880",
	                        "656ffd17b558"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
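For orientation: the NetworkSettings block above publishes the registry port 5000/tcp at 127.0.0.1:33135 and pins the container to the static IP 192.168.49.2, the address the test probed at 18:52:33. A quick sketch of checking both paths from the host (the docker port and curl invocations are standard; expected values are copied from this report):

    docker port addons-971880 5000/tcp   # expect: 127.0.0.1:33135
    curl -sI http://192.168.49.2:5000/   # registry via the container's network IP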
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-971880 -n addons-971880
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-971880 logs -n 25: (1.609953344s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-975733   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p download-only-975733              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-975733              | download-only-975733   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | -o=json --download-only              | download-only-217912   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p download-only-217912              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:40 UTC |
	| delete  | -p download-only-217912              | download-only-217912   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| delete  | -p download-only-975733              | download-only-975733   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| delete  | -p download-only-217912              | download-only-217912   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| start   | --download-only -p                   | download-docker-592744 | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | download-docker-592744               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-592744            | download-docker-592744 | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| start   | --download-only -p                   | binary-mirror-388144   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | binary-mirror-388144                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33855               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-388144              | binary-mirror-388144   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| addons  | enable dashboard -p                  | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | addons-971880                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | addons-971880                        |                        |         |         |                     |                     |
	| start   | -p addons-971880 --wait=true         | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:43 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-971880 addons                 | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-971880 addons                 | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-971880 ip                     | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	| addons  | addons-971880 addons disable         | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:40:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:40:14.795022  293537 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:40:14.795209  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:40:14.795239  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:40:14.795263  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:40:14.795520  293537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 18:40:14.796051  293537 out.go:352] Setting JSON to false
	I0919 18:40:14.796950  293537 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8547,"bootTime":1726762668,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 18:40:14.797050  293537 start.go:139] virtualization:  
	I0919 18:40:14.799511  293537 out.go:177] * [addons-971880] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0919 18:40:14.802404  293537 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:40:14.802594  293537 notify.go:220] Checking for updates...
	I0919 18:40:14.806697  293537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:40:14.809013  293537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 18:40:14.810889  293537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	I0919 18:40:14.813452  293537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 18:40:14.815382  293537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:40:14.817599  293537 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:40:14.840916  293537 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:40:14.841034  293537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:40:14.895857  293537 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-19 18:40:14.88564199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:40:14.895981  293537 docker.go:318] overlay module found
	I0919 18:40:14.898681  293537 out.go:177] * Using the docker driver based on user configuration
	I0919 18:40:14.900591  293537 start.go:297] selected driver: docker
	I0919 18:40:14.900609  293537 start.go:901] validating driver "docker" against <nil>
	I0919 18:40:14.900622  293537 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:40:14.901261  293537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:40:14.949650  293537 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-19 18:40:14.940202371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:40:14.949868  293537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:40:14.950096  293537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:40:14.952238  293537 out.go:177] * Using Docker driver with root privileges
	I0919 18:40:14.954169  293537 cni.go:84] Creating CNI manager for ""
	I0919 18:40:14.954244  293537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:40:14.954258  293537 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 18:40:14.954352  293537 start.go:340] cluster config:
	{Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:40:14.957664  293537 out.go:177] * Starting "addons-971880" primary control-plane node in "addons-971880" cluster
	I0919 18:40:14.959288  293537 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 18:40:14.961126  293537 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:40:14.962695  293537 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:14.962751  293537 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0919 18:40:14.962778  293537 cache.go:56] Caching tarball of preloaded images
	I0919 18:40:14.962775  293537 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:40:14.962860  293537 preload.go:172] Found /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0919 18:40:14.962870  293537 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 18:40:14.963218  293537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/config.json ...
	I0919 18:40:14.963237  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/config.json: {Name:mkdcb27e8211740d95283674cbbbe61d3cf7cd3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:14.982197  293537 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon, skipping pull
	I0919 18:40:14.982222  293537 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in daemon, skipping load
	I0919 18:40:14.982238  293537 cache.go:194] Successfully downloaded all kic artifacts
	I0919 18:40:14.982271  293537 start.go:360] acquireMachinesLock for addons-971880: {Name:mk9a87d1a88ed96332d84a90b344d67278fbcfbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:40:14.982383  293537 start.go:364] duration metric: took 90.97µs to acquireMachinesLock for "addons-971880"
	I0919 18:40:14.982415  293537 start.go:93] Provisioning new machine with config: &{Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:40:14.982485  293537 start.go:125] createHost starting for "" (driver="docker")
	I0919 18:40:14.985182  293537 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0919 18:40:14.985446  293537 start.go:159] libmachine.API.Create for "addons-971880" (driver="docker")
	I0919 18:40:14.985494  293537 client.go:168] LocalClient.Create starting
	I0919 18:40:14.985608  293537 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem
	I0919 18:40:15.651179  293537 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem
	I0919 18:40:16.244767  293537 cli_runner.go:164] Run: docker network inspect addons-971880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 18:40:16.259573  293537 cli_runner.go:211] docker network inspect addons-971880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 18:40:16.259663  293537 network_create.go:284] running [docker network inspect addons-971880] to gather additional debugging logs...
	I0919 18:40:16.259686  293537 cli_runner.go:164] Run: docker network inspect addons-971880
	W0919 18:40:16.278892  293537 cli_runner.go:211] docker network inspect addons-971880 returned with exit code 1
	I0919 18:40:16.278928  293537 network_create.go:287] error running [docker network inspect addons-971880]: docker network inspect addons-971880: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-971880 not found
	I0919 18:40:16.278941  293537 network_create.go:289] output of [docker network inspect addons-971880]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-971880 not found
	
	** /stderr **
	I0919 18:40:16.279047  293537 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:40:16.293226  293537 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001753420}
	I0919 18:40:16.293268  293537 network_create.go:124] attempt to create docker network addons-971880 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 18:40:16.293334  293537 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-971880 addons-971880
	I0919 18:40:16.363897  293537 network_create.go:108] docker network addons-971880 192.168.49.0/24 created
	I0919 18:40:16.363930  293537 kic.go:121] calculated static IP "192.168.49.2" for the "addons-971880" container
	I0919 18:40:16.364004  293537 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 18:40:16.380178  293537 cli_runner.go:164] Run: docker volume create addons-971880 --label name.minikube.sigs.k8s.io=addons-971880 --label created_by.minikube.sigs.k8s.io=true
	I0919 18:40:16.395244  293537 oci.go:103] Successfully created a docker volume addons-971880
	I0919 18:40:16.395327  293537 cli_runner.go:164] Run: docker run --rm --name addons-971880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-971880 --entrypoint /usr/bin/test -v addons-971880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0919 18:40:17.535557  293537 cli_runner.go:217] Completed: docker run --rm --name addons-971880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-971880 --entrypoint /usr/bin/test -v addons-971880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.140188795s)
	I0919 18:40:17.535586  293537 oci.go:107] Successfully prepared a docker volume addons-971880
	I0919 18:40:17.535611  293537 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:17.535632  293537 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 18:40:17.535690  293537 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-971880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 18:40:21.621921  293537 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-971880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.086185357s)
	I0919 18:40:21.621955  293537 kic.go:203] duration metric: took 4.086318543s to extract preloaded images to volume ...
	W0919 18:40:21.622102  293537 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0919 18:40:21.622210  293537 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 18:40:21.679227  293537 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-971880 --name addons-971880 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-971880 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-971880 --network addons-971880 --ip 192.168.49.2 --volume addons-971880:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0919 18:40:22.007220  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Running}}
	I0919 18:40:22.032291  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:22.055098  293537 cli_runner.go:164] Run: docker exec addons-971880 stat /var/lib/dpkg/alternatives/iptables
	I0919 18:40:22.125415  293537 oci.go:144] the created container "addons-971880" has a running status.
	I0919 18:40:22.125445  293537 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa...
	I0919 18:40:22.576988  293537 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 18:40:22.615973  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:22.638224  293537 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 18:40:22.638243  293537 kic_runner.go:114] Args: [docker exec --privileged addons-971880 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 18:40:22.722473  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:22.742554  293537 machine.go:93] provisionDockerMachine start ...
	I0919 18:40:22.743352  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:22.774687  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:22.774949  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:22.774959  293537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 18:40:22.948505  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-971880
	
	I0919 18:40:22.948580  293537 ubuntu.go:169] provisioning hostname "addons-971880"
	I0919 18:40:22.948677  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:22.969896  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:22.970140  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:22.970160  293537 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-971880 && echo "addons-971880" | sudo tee /etc/hostname
	I0919 18:40:23.142085  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-971880
	
	I0919 18:40:23.142233  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:23.173045  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:23.173282  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:23.173299  293537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-971880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-971880/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-971880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:40:23.320150  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:40:23.320184  293537 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-287261/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-287261/.minikube}
	I0919 18:40:23.320208  293537 ubuntu.go:177] setting up certificates
	I0919 18:40:23.320217  293537 provision.go:84] configureAuth start
	I0919 18:40:23.320288  293537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-971880
	I0919 18:40:23.336724  293537 provision.go:143] copyHostCerts
	I0919 18:40:23.336810  293537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem (1082 bytes)
	I0919 18:40:23.336932  293537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem (1123 bytes)
	I0919 18:40:23.337048  293537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem (1675 bytes)
	I0919 18:40:23.337107  293537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem org=jenkins.addons-971880 san=[127.0.0.1 192.168.49.2 addons-971880 localhost minikube]
	I0919 18:40:23.784639  293537 provision.go:177] copyRemoteCerts
	I0919 18:40:23.784720  293537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:40:23.784763  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:23.802489  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:23.909246  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 18:40:23.934171  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 18:40:23.958543  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 18:40:23.982904  293537 provision.go:87] duration metric: took 662.664687ms to configureAuth
	I0919 18:40:23.982931  293537 ubuntu.go:193] setting minikube options for container-runtime
	I0919 18:40:23.983122  293537 config.go:182] Loaded profile config "addons-971880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:23.983236  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.012307  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:24.012571  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:24.012592  293537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 18:40:24.296885  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 18:40:24.296910  293537 machine.go:96] duration metric: took 1.554333983s to provisionDockerMachine
	I0919 18:40:24.296921  293537 client.go:171] duration metric: took 9.31141665s to LocalClient.Create
	I0919 18:40:24.296935  293537 start.go:167] duration metric: took 9.311489709s to libmachine.API.Create "addons-971880"
	I0919 18:40:24.296951  293537 start.go:293] postStartSetup for "addons-971880" (driver="docker")
	I0919 18:40:24.296965  293537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:40:24.297040  293537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:40:24.297084  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.314189  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.421363  293537 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:40:24.424465  293537 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:40:24.424502  293537 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:40:24.424514  293537 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:40:24.424521  293537 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 18:40:24.424532  293537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/addons for local assets ...
	I0919 18:40:24.424607  293537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/files for local assets ...
	I0919 18:40:24.424637  293537 start.go:296] duration metric: took 127.676808ms for postStartSetup
	I0919 18:40:24.424947  293537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-971880
	I0919 18:40:24.441276  293537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/config.json ...
	I0919 18:40:24.441573  293537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:40:24.441628  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.457539  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.557015  293537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 18:40:24.561316  293537 start.go:128] duration metric: took 9.578811258s to createHost
	I0919 18:40:24.561341  293537 start.go:83] releasing machines lock for "addons-971880", held for 9.578944592s
	I0919 18:40:24.561411  293537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-971880
	I0919 18:40:24.576931  293537 ssh_runner.go:195] Run: cat /version.json
	I0919 18:40:24.576990  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.576994  293537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:40:24.577069  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.594043  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.600367  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.825657  293537 ssh_runner.go:195] Run: systemctl --version
	I0919 18:40:24.829981  293537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 18:40:24.973384  293537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:40:24.977678  293537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:40:24.998966  293537 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 18:40:24.999140  293537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:40:25.045694  293537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
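The two find invocations above show how minikube sidelines conflicting CNI configs rather than deleting them: loopback, bridge, and podman files under /etc/cni/net.d are renamed with a .mk_disabled suffix so they can be restored later. Quoted for an interactive shell, the bridge/podman step amounts to roughly this sketch:

    # rename (not delete) bridge/podman CNI configs, mirroring the find call in the log
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;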
	I0919 18:40:25.045717  293537 start.go:495] detecting cgroup driver to use...
	I0919 18:40:25.045766  293537 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:40:25.045818  293537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 18:40:25.065419  293537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 18:40:25.077859  293537 docker.go:217] disabling cri-docker service (if available) ...
	I0919 18:40:25.077968  293537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 18:40:25.094706  293537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 18:40:25.112860  293537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 18:40:25.209683  293537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 18:40:25.302151  293537 docker.go:233] disabling docker service ...
	I0919 18:40:25.302273  293537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 18:40:25.323334  293537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 18:40:25.336378  293537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 18:40:25.429738  293537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 18:40:25.535609  293537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 18:40:25.547524  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:40:25.564274  293537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 18:40:25.564345  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.574971  293537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 18:40:25.575106  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.586035  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.596962  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.607358  293537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:40:25.617457  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.627519  293537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.643763  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
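Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and inject a default sysctl that opens unprivileged ports to containers. Reconstructed from those commands (section placement assumed from a stock crio.conf; this is a sketch, not captured output), the resulting fragment should look roughly like:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]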
	I0919 18:40:25.653582  293537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:40:25.662617  293537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:40:25.671391  293537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:25.758584  293537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 18:40:25.881679  293537 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 18:40:25.881797  293537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 18:40:25.885692  293537 start.go:563] Will wait 60s for crictl version
	I0919 18:40:25.885756  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:40:25.889290  293537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:40:25.931872  293537 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
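The version probe above reaches CRI-O through the socket registered in /etc/crictl.yaml a moment earlier; the same check can be reproduced by hand on the node, for example:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version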
	I0919 18:40:25.932001  293537 ssh_runner.go:195] Run: crio --version
	I0919 18:40:25.972764  293537 ssh_runner.go:195] Run: crio --version
	I0919 18:40:26.020911  293537 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0919 18:40:26.023368  293537 cli_runner.go:164] Run: docker network inspect addons-971880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:40:26.039908  293537 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 18:40:26.044177  293537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
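The /etc/hosts rewrite above is deliberately idempotent: grep -v drops any stale host.minikube.internal line, the fresh tab-separated mapping is appended, and the result is installed with sudo cp (a plain output redirection would not run with elevated privileges). Generalized, with ip and name as placeholders:

    ip=192.168.49.1; name=host.minikube.internal
    { grep -v "$(printf '\t')${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts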
	I0919 18:40:26.057328  293537 kubeadm.go:883] updating cluster {Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:40:26.057469  293537 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:26.057534  293537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:40:26.133555  293537 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:40:26.133583  293537 crio.go:433] Images already preloaded, skipping extraction
	I0919 18:40:26.133643  293537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:40:26.173236  293537 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:40:26.173261  293537 cache_images.go:84] Images are preloaded, skipping loading
	I0919 18:40:26.173270  293537 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0919 18:40:26.173424  293537 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-971880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 18:40:26.173545  293537 ssh_runner.go:195] Run: crio config
	I0919 18:40:26.220780  293537 cni.go:84] Creating CNI manager for ""
	I0919 18:40:26.220804  293537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:40:26.220815  293537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:40:26.220841  293537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-971880 NodeName:addons-971880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:40:26.220981  293537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-971880"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
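The manifest above still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 accepts but reports as deprecated further down in this log. A config like this can be sanity-checked, or migrated to the current API version, before init; for example (input path from this run, output name illustrative):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml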
	
	I0919 18:40:26.221063  293537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:40:26.230055  293537 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:40:26.230128  293537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:40:26.239075  293537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 18:40:26.257194  293537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:40:26.275405  293537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0919 18:40:26.294207  293537 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 18:40:26.297608  293537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:40:26.308590  293537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:26.398728  293537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:40:26.412875  293537 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880 for IP: 192.168.49.2
	I0919 18:40:26.412939  293537 certs.go:194] generating shared ca certs ...
	I0919 18:40:26.412971  293537 certs.go:226] acquiring lock for ca certs: {Name:mk523f1ff29ba1b125a662d8a16466e488af99fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:26.413155  293537 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key
	I0919 18:40:27.099466  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt ...
	I0919 18:40:27.099502  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt: {Name:mk72ad373d845c3dfe8b530e275b045be3f9ea44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.099743  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key ...
	I0919 18:40:27.099758  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key: {Name:mk6927d0aa607f1c3942a9244061e169aede669f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.099875  293537 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key
	I0919 18:40:27.690254  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt ...
	I0919 18:40:27.690284  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt: {Name:mka95663104efa43935e2407319e69b9f1a74e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.690470  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key ...
	I0919 18:40:27.690482  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key: {Name:mk6fc29661ffdcbf98927cc74a4761e2f385ba1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.690561  293537 certs.go:256] generating profile certs ...
	I0919 18:40:27.690623  293537 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.key
	I0919 18:40:27.690651  293537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt with IP's: []
	I0919 18:40:28.051916  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt ...
	I0919 18:40:28.051949  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: {Name:mke5e1b1ca475791e881a9b267a71ff7d5e349d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.052153  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.key ...
	I0919 18:40:28.052169  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.key: {Name:mk22f66e5d44e53266af14f016ae74fdede1016f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.052261  293537 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f
	I0919 18:40:28.052281  293537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 18:40:28.439619  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f ...
	I0919 18:40:28.439652  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f: {Name:mk5ef899798c2f7f8cf7a6ca8b6bd7730a17a415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.439841  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f ...
	I0919 18:40:28.439855  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f: {Name:mkeaf10cc0c4d5344f5ac3188436e53b1f1f489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.439951  293537 certs.go:381] copying /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f -> /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt
	I0919 18:40:28.440041  293537 certs.go:385] copying /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f -> /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key
	I0919 18:40:28.440125  293537 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key
	I0919 18:40:28.440146  293537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt with IP's: []
	I0919 18:40:28.762615  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt ...
	I0919 18:40:28.762647  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt: {Name:mkc47d434d3ac3df7a1893f6cdfe2041dc8c73e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.762858  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key ...
	I0919 18:40:28.762874  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key: {Name:mk13c604db6dc59e6437e08ad373c38c986c71d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.763079  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 18:40:28.763126  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem (1082 bytes)
	I0919 18:40:28.763158  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:40:28.763190  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem (1675 bytes)
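Any of the certificates minted above can be inspected with openssl to confirm its subject, SANs, and validity window; for example, for the apiserver cert generated for this profile:

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt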
	I0919 18:40:28.763827  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:40:28.788710  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 18:40:28.813437  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:40:28.843050  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 18:40:28.867629  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:40:28.892447  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 18:40:28.919243  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:40:28.946630  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 18:40:28.971651  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:40:28.996622  293537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:40:29.016914  293537 ssh_runner.go:195] Run: openssl version
	I0919 18:40:29.022790  293537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:40:29.032837  293537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.036589  293537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.036657  293537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.043641  293537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
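The b5213941.0 link created above follows OpenSSL's hashed-directory convention: TLS clients look a CA up in /etc/ssl/certs by the subject hash that "openssl x509 -hash" prints, suffixed with .0. The two log commands combine into this generic sketch:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"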
	I0919 18:40:29.053700  293537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:40:29.057830  293537 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:40:29.057902  293537 kubeadm.go:392] StartCluster: {Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:40:29.058001  293537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 18:40:29.058061  293537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 18:40:29.100267  293537 cri.go:89] found id: ""
	I0919 18:40:29.100339  293537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:40:29.109720  293537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:40:29.118559  293537 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 18:40:29.118644  293537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:40:29.127755  293537 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:40:29.127779  293537 kubeadm.go:157] found existing configuration files:
	
	I0919 18:40:29.127861  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:40:29.136373  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:40:29.136470  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:40:29.145139  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:40:29.154300  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:40:29.154371  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:40:29.162969  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:40:29.172062  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:40:29.172201  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:40:29.180912  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:40:29.189802  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:40:29.189895  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 18:40:29.198252  293537 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 18:40:29.242636  293537 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:40:29.242730  293537 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:40:29.263410  293537 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 18:40:29.263486  293537 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0919 18:40:29.263526  293537 kubeadm.go:310] OS: Linux
	I0919 18:40:29.263578  293537 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 18:40:29.263638  293537 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0919 18:40:29.263690  293537 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 18:40:29.263742  293537 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 18:40:29.263795  293537 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 18:40:29.263853  293537 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 18:40:29.263910  293537 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 18:40:29.263966  293537 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 18:40:29.264017  293537 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0919 18:40:29.324338  293537 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:40:29.324483  293537 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:40:29.324600  293537 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:40:29.332452  293537 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:40:29.337268  293537 out.go:235]   - Generating certificates and keys ...
	I0919 18:40:29.337370  293537 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:40:29.337440  293537 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:40:29.819408  293537 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:40:30.596636  293537 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:40:31.221718  293537 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:40:31.614141  293537 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:40:31.765095  293537 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:40:31.765651  293537 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-971880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:40:32.058450  293537 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:40:32.058584  293537 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-971880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:40:32.624269  293537 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:40:32.992299  293537 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:40:33.509180  293537 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:40:33.509495  293537 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:40:33.874069  293537 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:40:34.248453  293537 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:40:34.476867  293537 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:40:34.768121  293537 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:40:34.973586  293537 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:40:34.974364  293537 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:40:34.977489  293537 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:40:34.980287  293537 out.go:235]   - Booting up control plane ...
	I0919 18:40:34.980416  293537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:40:34.980503  293537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:40:34.981705  293537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:40:34.992817  293537 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:40:35.003887  293537 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:40:35.004094  293537 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:40:35.102215  293537 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:40:35.102357  293537 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:40:37.103366  293537 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001367465s
	I0919 18:40:37.103468  293537 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:40:43.109466  293537 kubeadm.go:310] [api-check] The API server is healthy after 6.004105102s
	I0919 18:40:43.126717  293537 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:40:43.141419  293537 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:40:43.170749  293537 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:40:43.170964  293537 kubeadm.go:310] [mark-control-plane] Marking the node addons-971880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:40:43.182173  293537 kubeadm.go:310] [bootstrap-token] Using token: ebqgh7.vowgkmg5fzhkih57
	I0919 18:40:43.184491  293537 out.go:235]   - Configuring RBAC rules ...
	I0919 18:40:43.184636  293537 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:40:43.189100  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:40:43.198269  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:40:43.201802  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:40:43.205374  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:40:43.209929  293537 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:40:43.515171  293537 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:40:43.950419  293537 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:40:44.514706  293537 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:40:44.516367  293537 kubeadm.go:310] 
	I0919 18:40:44.516445  293537 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:40:44.516459  293537 kubeadm.go:310] 
	I0919 18:40:44.516539  293537 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:40:44.516549  293537 kubeadm.go:310] 
	I0919 18:40:44.516575  293537 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:40:44.516640  293537 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:40:44.516698  293537 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:40:44.516707  293537 kubeadm.go:310] 
	I0919 18:40:44.516764  293537 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:40:44.516773  293537 kubeadm.go:310] 
	I0919 18:40:44.516823  293537 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:40:44.516832  293537 kubeadm.go:310] 
	I0919 18:40:44.516885  293537 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:40:44.516972  293537 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:40:44.517047  293537 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:40:44.517059  293537 kubeadm.go:310] 
	I0919 18:40:44.517143  293537 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:40:44.517237  293537 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:40:44.517248  293537 kubeadm.go:310] 
	I0919 18:40:44.517338  293537 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ebqgh7.vowgkmg5fzhkih57 \
	I0919 18:40:44.517446  293537 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e7e5d662c08ea043dbeea6d8ddc73c887c0affcdbd05da0c73a8636c5020b2b0 \
	I0919 18:40:44.517472  293537 kubeadm.go:310] 	--control-plane 
	I0919 18:40:44.517480  293537 kubeadm.go:310] 
	I0919 18:40:44.517565  293537 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:40:44.517574  293537 kubeadm.go:310] 
	I0919 18:40:44.517657  293537 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ebqgh7.vowgkmg5fzhkih57 \
	I0919 18:40:44.517766  293537 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e7e5d662c08ea043dbeea6d8ddc73c887c0affcdbd05da0c73a8636c5020b2b0 
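The --discovery-token-ca-cert-hash printed above can be recomputed from the cluster CA to verify a join command before running it. On this node the CA is at /var/lib/minikube/certs/ca.crt (stock kubeadm installs use /etc/kubernetes/pki/ca.crt); the standard recipe, assuming an RSA CA key, is:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'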
	I0919 18:40:44.521437  293537 kubeadm.go:310] W0919 18:40:29.239267    1169 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:44.521742  293537 kubeadm.go:310] W0919 18:40:29.240213    1169 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:44.521961  293537 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0919 18:40:44.522073  293537 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:40:44.522100  293537 cni.go:84] Creating CNI manager for ""
	I0919 18:40:44.522107  293537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:40:44.524557  293537 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 18:40:44.526468  293537 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 18:40:44.530881  293537 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0919 18:40:44.530902  293537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 18:40:44.551529  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 18:40:44.830646  293537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:40:44.830784  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:44.830899  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-971880 minikube.k8s.io/updated_at=2024_09_19T18_40_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-971880 minikube.k8s.io/primary=true
	I0919 18:40:44.846703  293537 ops.go:34] apiserver oom_adj: -16
	I0919 18:40:44.988855  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:45.489713  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:45.988930  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:46.489778  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:46.989447  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:47.489853  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:47.988905  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:48.092838  293537 kubeadm.go:1113] duration metric: took 3.262100386s to wait for elevateKubeSystemPrivileges
	I0919 18:40:48.092865  293537 kubeadm.go:394] duration metric: took 19.034985288s to StartCluster
	I0919 18:40:48.092882  293537 settings.go:142] acquiring lock: {Name:mkc6a05e17453fceabfc207d0b4cc62ec1022659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:48.093002  293537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 18:40:48.093407  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/kubeconfig: {Name:mkfb909fdfd15278a636c3045acef421204406b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:48.093611  293537 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:40:48.093742  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:40:48.093981  293537 config.go:182] Loaded profile config "addons-971880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:48.094022  293537 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
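The toEnable map above is the full addon matrix requested for this profile; each entry maps to a switch that can be flipped per profile from the host, for example (profile name from this run; note that volcano is rejected near the end of this log because the addon does not support crio):

    minikube -p addons-971880 addons enable metrics-server
    minikube -p addons-971880 addons disable volcano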
	I0919 18:40:48.094098  293537 addons.go:69] Setting yakd=true in profile "addons-971880"
	I0919 18:40:48.094113  293537 addons.go:234] Setting addon yakd=true in "addons-971880"
	I0919 18:40:48.094135  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.094641  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.095216  293537 addons.go:69] Setting cloud-spanner=true in profile "addons-971880"
	I0919 18:40:48.095236  293537 addons.go:234] Setting addon cloud-spanner=true in "addons-971880"
	I0919 18:40:48.095263  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.095702  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.095942  293537 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-971880"
	I0919 18:40:48.095971  293537 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-971880"
	I0919 18:40:48.096001  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.096486  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.099515  293537 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-971880"
	I0919 18:40:48.099580  293537 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-971880"
	I0919 18:40:48.099611  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.100085  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.103641  293537 addons.go:69] Setting default-storageclass=true in profile "addons-971880"
	I0919 18:40:48.103682  293537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-971880"
	I0919 18:40:48.104031  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.104497  293537 addons.go:69] Setting registry=true in profile "addons-971880"
	I0919 18:40:48.104553  293537 addons.go:234] Setting addon registry=true in "addons-971880"
	I0919 18:40:48.104649  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.110424  293537 addons.go:69] Setting gcp-auth=true in profile "addons-971880"
	I0919 18:40:48.110516  293537 mustload.go:65] Loading cluster: addons-971880
	I0919 18:40:48.110775  293537 config.go:182] Loaded profile config "addons-971880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:48.111137  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.113963  293537 addons.go:69] Setting storage-provisioner=true in profile "addons-971880"
	I0919 18:40:48.114039  293537 addons.go:234] Setting addon storage-provisioner=true in "addons-971880"
	I0919 18:40:48.114115  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.114635  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.124272  293537 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-971880"
	I0919 18:40:48.124372  293537 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-971880"
	I0919 18:40:48.125252  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.126048  293537 addons.go:69] Setting ingress=true in profile "addons-971880"
	I0919 18:40:48.126119  293537 addons.go:234] Setting addon ingress=true in "addons-971880"
	I0919 18:40:48.128419  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.133638  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.139430  293537 addons.go:69] Setting ingress-dns=true in profile "addons-971880"
	I0919 18:40:48.139516  293537 addons.go:234] Setting addon ingress-dns=true in "addons-971880"
	I0919 18:40:48.139599  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.148412  293537 addons.go:69] Setting volcano=true in profile "addons-971880"
	I0919 18:40:48.148444  293537 addons.go:234] Setting addon volcano=true in "addons-971880"
	I0919 18:40:48.148485  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.148978  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.154658  293537 addons.go:69] Setting inspektor-gadget=true in profile "addons-971880"
	I0919 18:40:48.155015  293537 addons.go:234] Setting addon inspektor-gadget=true in "addons-971880"
	I0919 18:40:48.155265  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.160373  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.168211  293537 addons.go:69] Setting volumesnapshots=true in profile "addons-971880"
	I0919 18:40:48.168262  293537 addons.go:234] Setting addon volumesnapshots=true in "addons-971880"
	I0919 18:40:48.168315  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.168800  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.169585  293537 addons.go:69] Setting metrics-server=true in profile "addons-971880"
	I0919 18:40:48.169648  293537 addons.go:234] Setting addon metrics-server=true in "addons-971880"
	I0919 18:40:48.169697  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.170227  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.188308  293537 out.go:177] * Verifying Kubernetes components...
	I0919 18:40:48.192536  293537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:48.193478  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.200152  293537 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:40:48.203808  293537 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:48.203875  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:40:48.203989  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.225477  293537 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:40:48.227964  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:40:48.228044  293537 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:40:48.228159  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.233391  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.247643  293537 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:40:48.247770  293537 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:40:48.249933  293537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:48.249954  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:40:48.250022  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.250280  293537 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:48.250292  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:40:48.250331  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.268018  293537 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-971880"
	I0919 18:40:48.268064  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.268669  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.301994  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.321185  293537 addons.go:234] Setting addon default-storageclass=true in "addons-971880"
	I0919 18:40:48.321281  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.321773  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.335436  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:48.370533  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:40:48.379795  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:48.386450  293537 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:40:48.386524  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:40:48.386624  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	W0919 18:40:48.424168  293537 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0919 18:40:48.424644  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:40:48.425429  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:40:48.436157  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
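The two docker templates above drive everything that follows: `--format={{.State.Status}}` gates each addon on the container being up, and the `22/tcp` HostPort template resolves the published SSH port that each sshutil client then dials. In this run it resolves to the 33133 seen in the ssh client line above; reconstructed from this log, the lookup and its result are:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-971880
	33133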
	I0919 18:40:48.442440  293537 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:40:48.443423  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.445300  293537 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:40:48.445323  293537 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:40:48.445395  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.456187  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:40:48.457831  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:40:48.460477  293537 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:40:48.460613  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.461087  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:40:48.468240  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.470505  293537 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:40:48.470671  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:40:48.470691  293537 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:40:48.470755  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.471236  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:40:48.471287  293537 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:40:48.471381  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.476564  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:40:48.478508  293537 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 18:40:48.478550  293537 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:40:48.485990  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:40:48.486221  293537 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:40:48.486238  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:40:48.486306  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.492362  293537 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:40:48.492383  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:40:48.492450  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.507563  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:40:48.510093  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:40:48.520263  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:40:48.522642  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:40:48.522662  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:40:48.522728  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.523732  293537 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:40:48.528259  293537 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:40:48.537887  293537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:48.537911  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:40:48.537972  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.563951  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.620227  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.621676  293537 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:48.621692  293537 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:40:48.621753  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.656254  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.660010  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.662112  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.680282  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.691828  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.701064  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.730089  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.847079  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:40:48.847155  293537 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:40:48.899693  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:48.952737  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:48.983419  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:49.059022  293537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:40:49.066672  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:49.071175  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:40:49.071244  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:40:49.089048  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:40:49.089124  293537 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:40:49.105564  293537 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:40:49.105644  293537 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:40:49.141153  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:49.153728  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:40:49.171677  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:40:49.171749  293537 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:40:49.196160  293537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:40:49.196258  293537 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:40:49.201404  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:40:49.300209  293537 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:40:49.300237  293537 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:40:49.307634  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:49.307707  293537 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:40:49.314732  293537 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:40:49.314805  293537 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:40:49.316388  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:40:49.316451  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:40:49.322479  293537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:40:49.322560  293537 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:40:49.324957  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:40:49.325025  293537 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:40:49.443569  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:49.465909  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:40:49.465986  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:40:49.486828  293537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:40:49.486903  293537 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:40:49.490513  293537 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:40:49.490583  293537 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:40:49.497348  293537 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:49.497417  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:40:49.499708  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:49.499771  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:40:49.604687  293537 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:40:49.604762  293537 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:40:49.622808  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:40:49.622885  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:40:49.638544  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:40:49.638621  293537 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:40:49.675019  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:49.677046  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:49.714011  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:40:49.714092  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:40:49.716817  293537 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:40:49.716895  293537 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:40:49.762646  293537 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:49.762723  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:40:49.803578  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:40:49.803657  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:40:49.810913  293537 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:40:49.810986  293537 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:40:49.866155  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:49.879116  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:40:49.879177  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:40:49.879534  293537 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:49.879572  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:40:49.968313  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:40:49.968393  293537 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:40:49.996879  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:50.013103  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:40:50.013191  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:40:50.050463  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:40:50.050536  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:40:50.104006  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:50.104091  293537 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:40:50.207761  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:52.084903  293537 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.659437294s)
	I0919 18:40:52.084932  293537 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
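The sed pipeline that just completed splices two directives into the stock CoreDNS Corefile before `kubectl replace` pushes the ConfigMap back: a `log` line ahead of `errors`, and a `hosts` block ahead of the resolv.conf forwarder so that host.minikube.internal resolves to the gateway. A minimal sketch of the resulting Corefile fragment, reconstructed from the sed expressions above (the `...` lines stand for the rest of the stock Corefile, which is untouched):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}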
	I0919 18:40:52.804936  293537 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-971880" context rescaled to 1 replicas
	I0919 18:40:52.874938  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.975162388s)
	I0919 18:40:53.461651  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.508878588s)
	I0919 18:40:53.462093  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.478595684s)
	I0919 18:40:53.462151  293537 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.403060239s)
	I0919 18:40:53.463230  293537 node_ready.go:35] waiting up to 6m0s for node "addons-971880" to be "Ready" ...
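The 6m0s node wait announced here is what produces the recurring `node "addons-971880" has status "Ready":"False"` lines below: the checker re-reads the node object and inspects its Ready condition until it flips to True. A compressed sketch of that check with client-go; the clientset wiring is assumed, and minikube's own helper lives in node_ready.go:

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// Sketch only: poll the node's Ready condition, as the node_ready.go:53 lines report below.
	// "cs" is an assumed, already-configured *kubernetes.Clientset.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady { // the condition printed as "Ready":"False"
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}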
	I0919 18:40:53.463954  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.39718287s)
	I0919 18:40:53.468432  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.327207142s)
	W0919 18:40:53.594176  293537 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
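The storage-provisioner-rancher warning above is a plain optimistic-concurrency conflict: the addon tried to mark the local-path StorageClass as default while another writer had already bumped its resourceVersion, so the update was rejected and needs a fresh Get/mutate/Update round. A hedged sketch of that retry pattern using client-go's conflict helper (not minikube's actual code; the clientset `cs` is assumed):

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// Sketch only: set the default-class annotation, retrying on resourceVersion conflicts.
	func markDefault(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict error here triggers another Get/mutate/Update round
		})
	}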
	I0919 18:40:54.547718  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.393904123s)
	I0919 18:40:54.547758  293537 addons.go:475] Verifying addon ingress=true in "addons-971880"
	I0919 18:40:54.548061  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.346577228s)
	I0919 18:40:54.548169  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.104526551s)
	I0919 18:40:54.548182  293537 addons.go:475] Verifying addon metrics-server=true in "addons-971880"
	I0919 18:40:54.548246  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.873146657s)
	I0919 18:40:54.548327  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.871207532s)
	I0919 18:40:54.548340  293537 addons.go:475] Verifying addon registry=true in "addons-971880"
	I0919 18:40:54.548534  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.682309401s)
	W0919 18:40:54.548991  293537 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:40:54.549023  293537 retry.go:31] will retry after 205.142793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
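This failure is an ordering race, not a broken manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass CR go through a single kubectl apply, and the CR is submitted before the freshly created CRDs are established in discovery, so no REST mapping for kind VolumeSnapshotClass exists yet. The retry scheduled above re-runs the batch (with --force, at 18:40:54.754480 below) once the CRDs have registered. A sketch of the more surgical fix, waiting on a CRD's Established condition before applying CRs of its kind (clientset wiring assumed):

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	// Sketch only: block until the CRD reports Established, then apply CRs of that kind.
	func waitEstablished(c apiextclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(context.Background(), 250*time.Millisecond, 30*time.Second, true,
			func(ctx context.Context) (bool, error) {
				crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // CRD not visible yet; keep polling
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}
	// e.g. waitEstablished(c, "volumesnapshotclasses.snapshot.storage.k8s.io")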
	I0919 18:40:54.548602  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.5516414s)
	I0919 18:40:54.551437  293537 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-971880 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:40:54.551461  293537 out.go:177] * Verifying ingress addon...
	I0919 18:40:54.551445  293537 out.go:177] * Verifying registry addon...
	I0919 18:40:54.557513  293537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 18:40:54.557513  293537 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:40:54.596066  293537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:54.596198  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.601815  293537 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:40:54.601882  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
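From here to the end of the excerpt the log is dominated by these kapi.go:96 poll loops, one per addon: list the pods matching a label selector, report their phase, and go around again until everything is Running. A compressed sketch of that loop with client-go; the interval, timeout, and clientset `cs` are illustrative assumptions, not minikube's actual values:

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// Sketch only: wait until every pod matching a label selector is Running.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists are simply retried
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // not Running yet; the log above prints this as "current state: Pending"
					}
				}
				return true, nil
			})
	}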
	I0919 18:40:54.754480  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:54.864433  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.656576474s)
	I0919 18:40:54.864471  293537 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-971880"
	I0919 18:40:54.868277  293537 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:40:54.871943  293537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:40:54.892561  293537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:54.892589  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.065376  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.066290  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.378204  293537 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:55.378236  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.467473  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:40:55.562562  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.564541  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.878298  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.069085  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.070574  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.378665  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.563025  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.564886  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.877667  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.063752  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.064417  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.376416  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.467718  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:40:57.564440  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.564945  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.876755  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.064027  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.065697  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.092344  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.337768039s)
	I0919 18:40:58.378257  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.403319  293537 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:40:58.403425  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:58.443191  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:58.567155  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.567685  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.572168  293537 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:40:58.592692  293537 addons.go:234] Setting addon gcp-auth=true in "addons-971880"
	I0919 18:40:58.592803  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:58.593324  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:58.612730  293537 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:40:58.612786  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:58.630178  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:58.730014  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:58.732139  293537 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:40:58.734124  293537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:40:58.734146  293537 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:40:58.768295  293537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:40:58.768320  293537 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:40:58.797975  293537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:58.797994  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:40:58.817821  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:58.876322  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.072911  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.074414  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.382599  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.438129  293537 addons.go:475] Verifying addon gcp-auth=true in "addons-971880"
	I0919 18:40:59.440631  293537 out.go:177] * Verifying gcp-auth addon...
	I0919 18:40:59.442860  293537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:40:59.464789  293537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:40:59.464814  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.480890  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:40:59.561671  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.562964  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.875651  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.946736  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.070465  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.077137  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.384708  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.448341  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.582774  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.583004  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.877719  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.948225  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.065344  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.067794  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.375354  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.448227  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.561780  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.562960  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.875771  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.945881  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.966831  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:02.062563  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.062799  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.376547  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.447352  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.561779  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.562611  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.875580  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.946256  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.061831  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.062387  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.375870  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.446437  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.560976  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.561891  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.875202  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.947962  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.967037  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:04.061480  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.062379  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.375941  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.446238  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.562468  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.562877  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.875285  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.946660  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.062465  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.062886  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.376001  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.446912  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.561838  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.562615  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.875421  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.946657  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.061543  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.062589  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.375196  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.446960  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.466347  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:06.562063  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.562764  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.875596  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.946898  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.061921  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.063181  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.375810  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.446026  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.561428  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.562546  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.875172  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.946505  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.061499  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:08.062816  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.376019  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.446272  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.467168  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:08.562612  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.562946  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:08.876335  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.946735  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.061910  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:09.062619  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.375133  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.447206  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.561421  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:09.562389  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.876131  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.946813  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.062507  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:10.064607  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.375353  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.446904  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.562895  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.563990  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:10.875793  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.946554  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.967028  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:11.061932  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:11.063150  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.375830  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.446348  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.561653  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:11.563206  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.875920  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.946654  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.061917  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:12.062460  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.375786  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.446012  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.562540  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.562908  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:12.877867  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.946299  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.060991  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:13.062119  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.375793  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.445883  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.466546  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:13.561960  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:13.562612  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.875666  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.947427  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.061694  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:14.062464  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.376511  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.446294  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.562791  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:14.563547  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.875418  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.946605  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.062005  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:15.063964  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:15.377135  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.446681  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.562379  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:15.562700  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:15.877033  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.946231  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.966501  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:16.062211  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:16.063155  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:16.376819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.446617  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.563906  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:16.565097  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:16.876051  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.946253  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.066866  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:17.067521  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:17.376509  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.446316  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.561816  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:17.562038  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:17.875656  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.946877  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.966970  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:18.061223  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:18.062175  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:18.376287  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.446551  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:18.561931  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:18.562843  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:18.875329  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.947103  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.061452  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:19.062306  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:19.375857  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.446215  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.561838  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:19.562763  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:19.875704  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.946970  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.967322  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:20.061991  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:20.063004  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:20.375819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.446018  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:20.561673  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:20.562416  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:20.875770  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.946314  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:21.061298  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:21.062118  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:21.376133  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.446523  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:21.561547  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:21.562572  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:21.875087  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.946665  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:22.061448  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:22.062440  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:22.375879  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.446019  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:22.467332  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:22.561928  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:22.562802  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:22.876174  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.947177  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:23.061819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:23.062482  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:23.375560  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:23.446172  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:23.560905  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:23.562464  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:23.875348  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:23.947319  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.060883  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:24.062369  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:24.376201  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:24.446773  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.562406  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:24.563400  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:24.876177  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:24.947005  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.969870  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:25.060859  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:25.061661  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:25.375277  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:25.447052  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:25.561034  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:25.562067  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:25.875854  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:25.946102  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:26.061201  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:26.062358  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:26.376390  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:26.446809  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:26.561912  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:26.562731  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:26.875687  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:26.946990  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:27.060915  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:27.061831  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:27.375963  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:27.446755  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:27.466656  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:27.561455  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:27.562776  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:27.875854  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:27.946628  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:28.061527  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:28.062880  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:28.376492  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:28.446888  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:28.561880  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:28.563128  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:28.876058  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:28.947461  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.061576  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:29.062985  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:29.375644  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:29.446816  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.561107  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:29.561942  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:29.875796  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:29.945754  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.966769  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:30.062213  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:30.062963  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:30.376763  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:30.446860  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:30.561690  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:30.562513  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:30.875970  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:30.945998  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:31.062104  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:31.063092  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:31.375520  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:31.446646  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:31.561126  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:31.562097  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:31.875721  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:31.946673  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.061200  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:32.061808  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:32.376029  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:32.446717  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.466655  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:32.561644  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:32.562929  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:32.875835  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:32.966393  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.984056  293537 node_ready.go:49] node "addons-971880" has status "Ready":"True"
	I0919 18:41:32.984085  293537 node_ready.go:38] duration metric: took 39.520822677s for node "addons-971880" to be "Ready" ...
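	(The run of poll lines above is minikube's node-readiness gate: node_ready.go checks node "addons-971880" roughly twice a second until its Ready condition flips to True, which here takes 39.52s. Below is a minimal client-go sketch of that kind of check, assuming a standard ~/.kube/config and reusing the ~500ms cadence and 6m budget visible in the log; it is an illustration, not minikube's actual node_ready.go code.)

                                                
-- sketch (illustrative, not from the test run) --
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isNodeReady reports whether the node's Ready condition is True, which is
	// the value behind the `has status "Ready":"True"/"False"` lines above.
	func isNodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Load ~/.kube/config; the node name below is taken from the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll about every 500ms for up to 6 minutes (both values assumed
		// from the timestamps and wait budgets printed in this log).
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, "addons-971880", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat lookup errors as "not ready yet" and keep polling
				}
				return isNodeReady(node), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println(`node "addons-971880" is Ready`)
	}
-- /sketch --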
	I0919 18:41:32.984096  293537 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:41:33.035725  293537 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lzshk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:33.085442  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:33.086727  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:33.401079  293537 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:41:33.401109  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:33.449540  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:33.562204  293537 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:41:33.562272  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:33.562938  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
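	(The "Found 2 Pods for label selector kubernetes.io/minikube-addons=registry" line above comes from kapi.go listing pods by selector and reporting their phases; each poll tick then prints `current state: Pending` until every matched pod leaves Pending. A hedged client-go sketch of the equivalent list-and-report query follows, with the selector and kube-system namespace taken from the log; this is illustrative code, not kapi.go itself.)

                                                
-- sketch (illustrative, not from the test run) --
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// The same selector the log is waiting on; "Found 2 Pods" above
		// corresponds to len(pods.Items) == 2 for this query.
		const selector = "kubernetes.io/minikube-addons=registry"
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
		for _, p := range pods.Items {
			// Status.Phase is the Pending/Running value the poll lines print.
			fmt.Printf("  %s: %s\n", p.Name, p.Status.Phase)
		}
	}
-- /sketch --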
	I0919 18:41:33.879142  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:33.979993  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.083770  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:34.085152  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:34.397059  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:34.486406  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.549855  293537 pod_ready.go:93] pod "coredns-7c65d6cfc9-lzshk" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.549882  293537 pod_ready.go:82] duration metric: took 1.514119286s for pod "coredns-7c65d6cfc9-lzshk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.549904  293537 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.557730  293537 pod_ready.go:93] pod "etcd-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.557755  293537 pod_ready.go:82] duration metric: took 7.843669ms for pod "etcd-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.558059  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.564913  293537 pod_ready.go:93] pod "kube-apiserver-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.564937  293537 pod_ready.go:82] duration metric: took 6.858144ms for pod "kube-apiserver-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.564948  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.570244  293537 pod_ready.go:93] pod "kube-controller-manager-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.570267  293537 pod_ready.go:82] duration metric: took 5.311429ms for pod "kube-controller-manager-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.570281  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pf8wk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.587641  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:34.589693  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:34.607709  293537 pod_ready.go:93] pod "kube-proxy-pf8wk" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.607736  293537 pod_ready.go:82] duration metric: took 37.446262ms for pod "kube-proxy-pf8wk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.607748  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.883869  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:34.946929  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.968272  293537 pod_ready.go:93] pod "kube-scheduler-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.968298  293537 pod_ready.go:82] duration metric: took 360.543214ms for pod "kube-scheduler-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.968310  293537 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace to be "Ready" ...
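	(At this point the harness switches from node readiness to per-pod readiness: pod_ready.go walks the system-critical pods, coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler all go Ready within seconds, and it then blocks on metrics-server-84c5f94fbc-jrbzm, which keeps reporting "Ready":"False" for the rest of this section and is the pod behind the TestAddons/parallel/MetricsServer failure listed at the top of this report. A sketch of such a per-pod Ready-condition wait follows, again assuming client-go and a local kubeconfig; the 2s poll interval is a guess, not the harness's value.)

                                                
-- sketch (illustrative, not from the test run) --
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady mirrors the check behind the pod_ready.go lines above: a pod
	// counts as Ready only when its PodReady condition is True, not merely
	// when its phase is Running.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Pod name taken from the log; in this run it never becomes Ready,
		// so this wait would exhaust its 6m budget and return an error.
		const ns, name = "kube-system", "metrics-server-84c5f94fbc-jrbzm"
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil
				}
				return isPodReady(pod), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q is Ready\n", name)
	}
-- /sketch --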
	I0919 18:41:35.066332  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:35.067047  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:35.377071  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:35.446694  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:35.563116  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:35.564514  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:35.878270  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:35.947116  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.062668  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:36.064093  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:36.378169  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:36.446076  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.562355  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:36.563611  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:36.877416  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:36.946831  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.976090  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:37.070127  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:37.070708  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:37.378036  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:37.449197  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:37.572857  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:37.574137  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:37.878958  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:37.947687  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:38.066635  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:38.068196  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:38.379960  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:38.447036  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:38.566180  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:38.567164  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:38.878059  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:38.948415  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:39.065678  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:39.068345  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:39.382645  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:39.446403  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:39.474388  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:39.561643  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:39.562705  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:39.876574  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:39.948622  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:40.064799  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:40.070969  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:40.378517  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:40.447109  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:40.563248  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:40.564262  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:40.878488  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:40.947935  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:41.066945  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:41.068000  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:41.377261  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:41.447055  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:41.475670  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:41.564547  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:41.565894  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:41.877348  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:41.946812  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:42.063870  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:42.065853  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:42.378089  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:42.447279  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:42.562819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:42.564637  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:42.877041  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:42.947027  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:43.063723  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:43.066318  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:43.378706  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:43.447494  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:43.475801  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:43.562902  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:43.565584  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:43.878005  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:43.958649  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:44.063440  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:44.064670  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:44.378042  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:44.446376  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:44.566188  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:44.567897  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:44.885274  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:44.948492  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:45.083599  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:45.091434  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:45.382104  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:45.479271  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:45.481202  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:45.565683  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:45.566749  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:45.877414  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:45.947825  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:46.065683  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:46.067689  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:46.377216  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:46.447142  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:46.564574  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:46.566078  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:46.879355  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:46.976594  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.062108  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:47.063081  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:47.377187  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:47.446380  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.563241  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:47.563925  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:47.877203  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:47.946716  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.974788  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:48.062852  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:48.063938  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:48.379041  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:48.446882  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:48.564580  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:48.567894  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:48.877326  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:48.947262  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.064573  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:49.065888  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:49.377713  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:49.446539  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.563561  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:49.564620  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:49.876718  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:49.946923  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.976834  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:50.062984  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:50.063275  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:50.378977  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:50.477858  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:50.562860  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:50.563269  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:50.877622  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:50.946366  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:51.061768  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:51.062839  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:51.379398  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:51.478496  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:51.564227  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:51.564662  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:51.877074  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:51.946553  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:52.062951  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:52.063955  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:52.377899  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:52.446850  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:52.479114  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:52.563898  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:52.565470  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:52.878227  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:52.947109  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:53.066828  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:53.066812  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:53.389853  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:53.450162  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:53.568765  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:53.569814  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:53.877173  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:53.947676  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:54.062677  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:54.064006  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:54.377637  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:54.447160  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:54.563690  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:54.565109  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:54.878630  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:54.949386  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:54.976344  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:55.065055  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:55.065708  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:55.377239  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:55.447929  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:55.565896  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:55.566378  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:55.877242  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:55.946425  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:56.062875  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:56.063179  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:56.379895  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:56.447076  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:56.562740  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:56.563473  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:56.877550  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:56.947753  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:57.067682  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:57.070082  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:57.377976  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:57.447187  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:57.475294  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:57.569051  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:57.570062  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:57.877048  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:57.983847  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:58.077525  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:58.079554  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:58.380087  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:58.446658  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:58.563211  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:58.564086  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:58.877276  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:58.946236  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:59.062712  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:59.062998  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:59.377963  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:59.446451  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:59.563498  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:59.564989  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:59.876502  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:59.947957  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:59.977211  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:00.121226  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:00.133316  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:00.391137  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:00.472275  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:00.566019  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:00.569802  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:00.877649  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:00.947625  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:01.066312  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:01.068223  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:01.377479  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:01.447220  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:01.563581  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:01.566404  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:01.877374  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:01.950613  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:01.979124  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:02.084459  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:02.085060  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:02.378285  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:02.447656  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:02.564352  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:02.566754  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:02.877376  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:02.979247  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:03.078289  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:03.078836  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:03.377708  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:03.447086  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:03.561944  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:03.563509  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:03.877703  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:03.950350  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:04.062096  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:04.064223  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:04.377278  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:04.446833  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:04.475318  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:04.562383  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:04.563641  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:04.884659  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:04.989733  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:05.061214  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:05.063030  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:05.377498  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:05.447815  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:05.565160  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:05.567761  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:05.876913  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:05.950444  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:06.063189  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:06.064164  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:06.379772  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:06.446607  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:06.478893  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:06.566184  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:06.567288  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:06.879373  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:06.948136  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:07.067070  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:07.072076  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:07.377008  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:07.446697  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:07.569679  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:07.571369  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:07.880635  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:07.947236  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:08.071737  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:08.077335  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:08.378546  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:08.447335  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:08.564632  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:08.565221  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:08.877848  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:08.946934  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:08.974974  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:09.064653  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:09.065797  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:09.377975  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:09.476947  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:09.563133  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:09.564689  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:09.876749  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:09.946248  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:10.062142  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:10.063644  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:10.377813  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:10.446860  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:10.562053  293537 kapi.go:107] duration metric: took 1m16.004535153s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 18:42:10.562215  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:10.876987  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:10.946342  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:11.061971  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:11.377751  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:11.447271  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:11.480810  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:11.563706  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:11.877410  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:11.947282  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:12.063287  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:12.378511  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:12.446827  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:12.563797  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:12.877171  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:12.946514  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:13.063988  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:13.379347  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:13.450900  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:13.481526  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:13.573718  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:13.878016  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:13.954643  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:14.065257  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:14.379195  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:14.447732  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:14.566057  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:14.878940  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:14.947019  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:15.066043  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:15.377698  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:15.448279  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:15.564997  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:15.876343  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:15.946958  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:15.976796  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:16.063898  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:16.377681  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:16.477282  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:16.562688  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:16.878927  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:16.946784  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:17.063387  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:17.377333  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:17.446740  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:17.563508  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:17.883389  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:17.948827  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:17.985932  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:18.064777  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:18.395701  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:18.488133  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:18.563688  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:18.880224  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:18.947351  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:19.067004  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:19.378182  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:19.446917  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:19.562840  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:19.877728  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:19.948075  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:20.064123  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:20.377853  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:20.447732  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:20.480331  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:20.565156  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:20.878939  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:20.978062  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:21.062166  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:21.378624  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:21.447261  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:21.563076  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:21.876989  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:21.946848  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:22.068830  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:22.377684  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:22.484429  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:22.485223  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:22.578263  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:22.878134  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:22.947298  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:23.065838  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:23.376555  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:23.448395  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:23.565684  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:23.877495  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:23.951222  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:24.062074  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:24.377460  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:24.485068  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:24.488152  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:24.584713  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:24.876694  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:24.946971  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:25.062114  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:25.389522  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:25.447036  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:25.562186  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:25.876882  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:25.946314  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:26.062299  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:26.378575  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:26.463928  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:26.495554  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:26.568642  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:26.878105  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:26.946857  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:27.063120  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:27.378102  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:27.447089  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:27.562236  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:27.876843  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:27.945837  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:28.063213  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:28.378654  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:28.447524  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:28.562451  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:28.878401  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:28.947457  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:28.977975  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:29.063832  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:29.377289  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:29.446975  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:29.562465  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:29.877929  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:29.946408  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:30.063568  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:30.379021  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:30.449320  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:30.565273  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:30.880376  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:30.986980  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:31.063033  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:31.377706  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:31.448205  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:31.478392  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:31.565141  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:31.877461  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:31.946903  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:32.062732  293537 kapi.go:107] duration metric: took 1m37.505219255s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 18:42:32.376733  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:32.448562  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:32.879931  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:32.978367  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:33.377007  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:33.452566  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:33.880325  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:33.960368  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:33.974903  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:34.376634  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:34.447204  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:34.881901  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:34.946224  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:35.377760  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:35.446114  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:35.878736  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:35.947223  293537 kapi.go:107] duration metric: took 1m36.504361675s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:42:35.949426  293537 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-971880 cluster.
	I0919 18:42:35.951872  293537 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:42:35.953970  293537 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0919 18:42:35.975189  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:36.377260  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:36.877815  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:37.377370  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:37.888310  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:37.982936  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:38.386669  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:38.877270  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:39.377530  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:39.877499  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:40.378293  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:40.475670  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:40.877392  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:41.376692  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:41.878130  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:42.378347  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:42.878515  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:42.977798  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:43.377066  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:43.878894  293537 kapi.go:107] duration metric: took 1m49.006949754s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:42:43.881074  293537 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0919 18:42:43.884077  293537 addons.go:510] duration metric: took 1m55.790054032s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
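
(Editor's note: the kapi.go:96 lines above are minikube polling each addon's label selector until its pods leave Pending; each kapi.go:107 line records how long a selector took. Below is a minimal sketch of such a readiness poll written with client-go — not minikube's actual kapi.go. The kubeconfig path, namespace, selector, 6-minute deadline, and 500 ms interval are all illustrative assumptions.)

// Hedged sketch of a label-selector readiness poll like the one logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default kubeconfig location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selector := "kubernetes.io/minikube-addons=registry" // selector taken from the log above
	deadline := time.Now().Add(6 * time.Minute)          // assumed timeout

	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && allRunning(pods.Items) {
			fmt.Println("pods ready for", selector)
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second polling
	}
	fmt.Println("timed out waiting for", selector)
}

// allRunning reports whether at least one pod matched and none is still Pending/Failed.
func allRunning(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

Checking Status.Phase is the simplest test; a fuller poll would also inspect the pod's Ready condition, which is why the log distinguishes "Pending" from "Ready".
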
	I0919 18:42:43.983903  293537 pod_ready.go:93] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"True"
	I0919 18:42:43.983991  293537 pod_ready.go:82] duration metric: took 1m9.015672466s for pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace to be "Ready" ...
	I0919 18:42:43.984031  293537 pod_ready.go:39] duration metric: took 1m10.999895399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:42:43.984651  293537 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:42:43.984805  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:42:43.984924  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:42:44.038733  293537 cri.go:89] found id: "a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:44.038756  293537 cri.go:89] found id: ""
	I0919 18:42:44.038765  293537 logs.go:276] 1 containers: [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99]
	I0919 18:42:44.038822  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.043249  293537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:42:44.043334  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:42:44.088606  293537 cri.go:89] found id: "1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:44.088631  293537 cri.go:89] found id: ""
	I0919 18:42:44.088639  293537 logs.go:276] 1 containers: [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b]
	I0919 18:42:44.088700  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.092415  293537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:42:44.092495  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:42:44.135646  293537 cri.go:89] found id: "c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:44.135670  293537 cri.go:89] found id: ""
	I0919 18:42:44.135678  293537 logs.go:276] 1 containers: [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4]
	I0919 18:42:44.135735  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.139218  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:42:44.139291  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:42:44.179758  293537 cri.go:89] found id: "d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:44.179782  293537 cri.go:89] found id: ""
	I0919 18:42:44.179790  293537 logs.go:276] 1 containers: [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569]
	I0919 18:42:44.179856  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.184338  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:42:44.184432  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:42:44.223834  293537 cri.go:89] found id: "dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:44.223868  293537 cri.go:89] found id: ""
	I0919 18:42:44.223877  293537 logs.go:276] 1 containers: [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a]
	I0919 18:42:44.223947  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.227670  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:42:44.227745  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:42:44.264952  293537 cri.go:89] found id: "4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:44.264974  293537 cri.go:89] found id: ""
	I0919 18:42:44.264982  293537 logs.go:276] 1 containers: [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341]
	I0919 18:42:44.265042  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.268932  293537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:42:44.269034  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:42:44.307612  293537 cri.go:89] found id: "dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:44.307635  293537 cri.go:89] found id: ""
	I0919 18:42:44.307644  293537 logs.go:276] 1 containers: [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82]
	I0919 18:42:44.307706  293537 ssh_runner.go:195] Run: which crictl
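
(Editor's note: each cri.go:54/cri.go:89 pair above discovers a component's container by shelling out to crictl, exactly as the Run: lines show. A small sketch of that discovery step, using only the crictl flags that appear in this log:)

// Hedged sketch of the container-discovery step logged above; this mirrors the
// `sudo crictl ps -a --quiet --name=<name>` commands, not minikube's cri.go itself.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (any state) whose name matches
// the given filter, as printed one per line by crictl with --quiet.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}

crictl's --quiet output ends with a trailing newline, which is why the log shows an empty found id: "" after each real ID; the sketch trims it instead of emitting the empty entry.
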
	I0919 18:42:44.311797  293537 logs.go:123] Gathering logs for container status ...
	I0919 18:42:44.311840  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:42:44.363577  293537 logs.go:123] Gathering logs for kubelet ...
	I0919 18:42:44.363608  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:42:44.393941  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: W0919 18:41:32.997203    1465 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.394218  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: E0919 18:41:32.997256    1465 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.394411  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016421    1465 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.394643  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016471    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.394822  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016528    1465 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.395044  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016541    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.395209  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016587    1465 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.395414  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016607    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.395601  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016654    1465 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.395828  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.396004  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.396232  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.396400  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.396607  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:42:44.454727  293537 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:42:44.454772  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:42:44.643066  293537 logs.go:123] Gathering logs for etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] ...
	I0919 18:42:44.643099  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:44.698468  293537 logs.go:123] Gathering logs for kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] ...
	I0919 18:42:44.698502  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:44.743288  293537 logs.go:123] Gathering logs for kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] ...
	I0919 18:42:44.743317  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:44.813056  293537 logs.go:123] Gathering logs for kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] ...
	I0919 18:42:44.813098  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:44.861228  293537 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:42:44.861256  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:42:44.957892  293537 logs.go:123] Gathering logs for dmesg ...
	I0919 18:42:44.957933  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:42:44.974633  293537 logs.go:123] Gathering logs for kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] ...
	I0919 18:42:44.974662  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:45.074514  293537 logs.go:123] Gathering logs for coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] ...
	I0919 18:42:45.075778  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:45.206965  293537 logs.go:123] Gathering logs for kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] ...
	I0919 18:42:45.207154  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
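
(Editor's note: the "Gathering logs for kubelet" step above runs journalctl and then flags suspicious lines, which logs.go:138 reports as "Found kubelet problem". A hedged sketch of that scan; the match patterns below are assumptions drawn from the reflector warnings shown in this log, not minikube's actual heuristics:)

// Hedged sketch: run journalctl for the kubelet unit and flag problem lines.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := sc.Text()
		// Patterns assumed from the kubelet problems reported in this log.
		if strings.Contains(line, "Unhandled Error") || strings.Contains(line, "is forbidden") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
}
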
	I0919 18:42:45.281778  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:45.281818  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:42:45.281948  293537 out.go:270] X Problems detected in kubelet:
	W0919 18:42:45.281964  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:45.281982  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:45.282001  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:45.282028  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:45.282048  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:42:45.282076  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:45.282086  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:42:55.283244  293537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:42:55.296761  293537 api_server.go:72] duration metric: took 2m7.20311709s to wait for apiserver process to appear ...
	I0919 18:42:55.296785  293537 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:42:55.297414  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:42:55.297493  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:42:55.343738  293537 cri.go:89] found id: "a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:55.343760  293537 cri.go:89] found id: ""
	I0919 18:42:55.343768  293537 logs.go:276] 1 containers: [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99]
	I0919 18:42:55.343824  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.348178  293537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:42:55.348259  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:42:55.387321  293537 cri.go:89] found id: "1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:55.387344  293537 cri.go:89] found id: ""
	I0919 18:42:55.387352  293537 logs.go:276] 1 containers: [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b]
	I0919 18:42:55.387410  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.391715  293537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:42:55.391785  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:42:55.430903  293537 cri.go:89] found id: "c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:55.430932  293537 cri.go:89] found id: ""
	I0919 18:42:55.430941  293537 logs.go:276] 1 containers: [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4]
	I0919 18:42:55.431002  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.434917  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:42:55.434994  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:42:55.477899  293537 cri.go:89] found id: "d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:55.477921  293537 cri.go:89] found id: ""
	I0919 18:42:55.477929  293537 logs.go:276] 1 containers: [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569]
	I0919 18:42:55.477984  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.481536  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:42:55.481605  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:42:55.519995  293537 cri.go:89] found id: "dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:55.520019  293537 cri.go:89] found id: ""
	I0919 18:42:55.520027  293537 logs.go:276] 1 containers: [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a]
	I0919 18:42:55.520084  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.523730  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:42:55.523808  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:42:55.563154  293537 cri.go:89] found id: "4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:55.563178  293537 cri.go:89] found id: ""
	I0919 18:42:55.563186  293537 logs.go:276] 1 containers: [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341]
	I0919 18:42:55.563270  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.567011  293537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:42:55.567115  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:42:55.606868  293537 cri.go:89] found id: "dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:55.606892  293537 cri.go:89] found id: ""
	I0919 18:42:55.606900  293537 logs.go:276] 1 containers: [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82]
	I0919 18:42:55.606979  293537 ssh_runner.go:195] Run: which crictl
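	[note] The block above is minikube's container-discovery pass: one crictl query per control-plane component, all states, IDs only. A minimal shell sketch of the same loop (assuming crictl is on PATH; component names taken from the log) is:
	
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      # -a: include exited containers; --quiet: print container IDs only
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      echo "$name: ${ids:-<none>}"
	    done
	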
	I0919 18:42:55.610547  293537 logs.go:123] Gathering logs for dmesg ...
	I0919 18:42:55.610575  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:42:55.626573  293537 logs.go:123] Gathering logs for kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] ...
	I0919 18:42:55.626606  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:55.694807  293537 logs.go:123] Gathering logs for etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] ...
	I0919 18:42:55.694847  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:55.746553  293537 logs.go:123] Gathering logs for coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] ...
	I0919 18:42:55.746589  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:55.790244  293537 logs.go:123] Gathering logs for kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] ...
	I0919 18:42:55.790314  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:55.858123  293537 logs.go:123] Gathering logs for kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] ...
	I0919 18:42:55.858161  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:55.899740  293537 logs.go:123] Gathering logs for kubelet ...
	I0919 18:42:55.899779  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:42:55.926340  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: W0919 18:41:32.997203    1465 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.926585  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: E0919 18:41:32.997256    1465 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.926774  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016421    1465 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.927013  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016471    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.927192  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016528    1465 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.927416  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016541    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.927579  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016587    1465 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.927784  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016607    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.927976  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016654    1465 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.928213  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.928388  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.928600  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.928771  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.928980  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
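	[note] The "Found kubelet problem" entries above come from minikube scanning the kubelet journal for known failure patterns. A rough shell analogue of that scan (the real matcher lives in minikube's logs.go, so this grep is only an approximation) is:
	
	    # pull the same 400-line window the log gatherer uses, then match reflector failures
	    sudo journalctl -u kubelet -n 400 --no-pager \
	      | grep -E 'reflector\.go:[0-9]+\].*(forbidden|Unhandled Error)'
	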
	I0919 18:42:55.987254  293537 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:42:55.987289  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:42:56.137844  293537 logs.go:123] Gathering logs for kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] ...
	I0919 18:42:56.137882  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:56.191991  293537 logs.go:123] Gathering logs for kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] ...
	I0919 18:42:56.192025  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:56.234794  293537 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:42:56.234827  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:42:56.325587  293537 logs.go:123] Gathering logs for container status ...
	I0919 18:42:56.325626  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:42:56.376152  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:56.376180  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:42:56.376244  293537 out.go:270] X Problems detected in kubelet:
	W0919 18:42:56.376253  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:56.376263  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:56.376271  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:56.376278  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:56.376285  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:42:56.376411  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:56.376419  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:43:06.376913  293537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 18:43:06.385497  293537 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 18:43:06.387607  293537 api_server.go:141] control plane version: v1.31.1
	I0919 18:43:06.387660  293537 api_server.go:131] duration metric: took 11.090867395s to wait for apiserver health ...
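	[note] The healthz wait above reduces to polling the apiserver's health endpoint until it answers 200 with body "ok"; done by hand against the address in the log, that is roughly:
	
	    # -k: the apiserver serves a self-signed cert; -f: fail on non-2xx; -s: quiet
	    until curl -ksf https://192.168.49.2:8443/healthz >/dev/null; do sleep 2; done
	    curl -ks https://192.168.49.2:8443/healthz   # prints: ok
	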
	I0919 18:43:06.387671  293537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:43:06.387696  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:43:06.387762  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:43:06.425666  293537 cri.go:89] found id: "a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:43:06.425689  293537 cri.go:89] found id: ""
	I0919 18:43:06.425697  293537 logs.go:276] 1 containers: [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99]
	I0919 18:43:06.425753  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.429431  293537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:43:06.429509  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:43:06.466851  293537 cri.go:89] found id: "1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:43:06.466875  293537 cri.go:89] found id: ""
	I0919 18:43:06.466883  293537 logs.go:276] 1 containers: [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b]
	I0919 18:43:06.466939  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.470472  293537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:43:06.470544  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:43:06.509833  293537 cri.go:89] found id: "c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:43:06.509856  293537 cri.go:89] found id: ""
	I0919 18:43:06.509865  293537 logs.go:276] 1 containers: [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4]
	I0919 18:43:06.509923  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.513953  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:43:06.514030  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:43:06.554749  293537 cri.go:89] found id: "d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:43:06.554774  293537 cri.go:89] found id: ""
	I0919 18:43:06.554783  293537 logs.go:276] 1 containers: [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569]
	I0919 18:43:06.554845  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.558418  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:43:06.558487  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:43:06.597281  293537 cri.go:89] found id: "dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:43:06.597304  293537 cri.go:89] found id: ""
	I0919 18:43:06.597312  293537 logs.go:276] 1 containers: [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a]
	I0919 18:43:06.597390  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.600882  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:43:06.600987  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:43:06.640680  293537 cri.go:89] found id: "4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:43:06.640705  293537 cri.go:89] found id: ""
	I0919 18:43:06.640713  293537 logs.go:276] 1 containers: [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341]
	I0919 18:43:06.640779  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.644382  293537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:43:06.644491  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:43:06.696347  293537 cri.go:89] found id: "dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:43:06.696373  293537 cri.go:89] found id: ""
	I0919 18:43:06.696381  293537 logs.go:276] 1 containers: [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82]
	I0919 18:43:06.696436  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.700014  293537 logs.go:123] Gathering logs for dmesg ...
	I0919 18:43:06.700041  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:43:06.720003  293537 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:43:06.720085  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:43:06.860572  293537 logs.go:123] Gathering logs for kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] ...
	I0919 18:43:06.860621  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:43:06.916995  293537 logs.go:123] Gathering logs for kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] ...
	I0919 18:43:06.917032  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:43:06.956031  293537 logs.go:123] Gathering logs for kubelet ...
	I0919 18:43:06.956059  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:43:06.980472  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: W0919 18:41:32.997203    1465 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.980836  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: E0919 18:41:32.997256    1465 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.981031  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016421    1465 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.981267  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016471    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.981447  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016528    1465 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.981668  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016541    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.981833  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016587    1465 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.982037  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016607    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.982224  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016654    1465 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.982461  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.982633  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.982849  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.983016  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.983221  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:43:07.042579  293537 logs.go:123] Gathering logs for etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] ...
	I0919 18:43:07.042616  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:43:07.101867  293537 logs.go:123] Gathering logs for coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] ...
	I0919 18:43:07.101904  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:43:07.146299  293537 logs.go:123] Gathering logs for kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] ...
	I0919 18:43:07.146391  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:43:07.195506  293537 logs.go:123] Gathering logs for kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] ...
	I0919 18:43:07.195545  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:43:07.269552  293537 logs.go:123] Gathering logs for kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] ...
	I0919 18:43:07.269590  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:43:07.315873  293537 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:43:07.315908  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:43:07.406127  293537 logs.go:123] Gathering logs for container status ...
	I0919 18:43:07.406168  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:43:07.460453  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:43:07.460483  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:43:07.460563  293537 out.go:270] X Problems detected in kubelet:
	W0919 18:43:07.460581  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:07.460590  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:43:07.460610  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:07.460616  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:43:07.460626  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:43:07.460633  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:43:07.460640  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:43:17.474173  293537 system_pods.go:59] 18 kube-system pods found
	I0919 18:43:17.474216  293537 system_pods.go:61] "coredns-7c65d6cfc9-lzshk" [fa76a4be-7a2f-482a-bb9a-f8b9caf2eed4] Running
	I0919 18:43:17.474223  293537 system_pods.go:61] "csi-hostpath-attacher-0" [e4afe744-fcb9-4ef1-83bc-7da6426a009e] Running
	I0919 18:43:17.474228  293537 system_pods.go:61] "csi-hostpath-resizer-0" [35bf4614-c53f-4a64-ba65-c4d2585a4618] Running
	I0919 18:43:17.474254  293537 system_pods.go:61] "csi-hostpathplugin-f4lvd" [c4e2104a-24a2-4d2b-982f-90c367e0f6f5] Running
	I0919 18:43:17.474265  293537 system_pods.go:61] "etcd-addons-971880" [48b082ac-da22-4582-a616-c7fc480b4ab7] Running
	I0919 18:43:17.474269  293537 system_pods.go:61] "kindnet-k2v8g" [0e23ba7f-3c08-474e-a24d-b217d7ad4fff] Running
	I0919 18:43:17.474273  293537 system_pods.go:61] "kube-apiserver-addons-971880" [2b208d09-ed22-4147-a82a-c346c0576a72] Running
	I0919 18:43:17.474278  293537 system_pods.go:61] "kube-controller-manager-addons-971880" [ef368deb-bcae-4de9-9cc2-02cce640782e] Running
	I0919 18:43:17.474288  293537 system_pods.go:61] "kube-ingress-dns-minikube" [afb5e949-2f5b-462a-89a2-809679640b8d] Running
	I0919 18:43:17.474292  293537 system_pods.go:61] "kube-proxy-pf8wk" [3daa047c-3145-421d-b44a-a991266a805e] Running
	I0919 18:43:17.474301  293537 system_pods.go:61] "kube-scheduler-addons-971880" [147489f6-4fd9-4831-8bc8-c03b9624170f] Running
	I0919 18:43:17.474305  293537 system_pods.go:61] "metrics-server-84c5f94fbc-jrbzm" [4dcd9c96-80a7-42f2-86ca-69d052a20c31] Running
	I0919 18:43:17.474312  293537 system_pods.go:61] "nvidia-device-plugin-daemonset-6b6sb" [d2508241-1d3e-43e2-b635-ccd577d441ef] Running
	I0919 18:43:17.474316  293537 system_pods.go:61] "registry-66c9cd494c-zjfvp" [95228612-f951-44f9-ac40-a54760497790] Running
	I0919 18:43:17.474331  293537 system_pods.go:61] "registry-proxy-mn6mx" [384cadea-3e7f-4b57-8edb-f51b9f4dde24] Running
	I0919 18:43:17.474337  293537 system_pods.go:61] "snapshot-controller-56fcc65765-hqvnz" [e4649166-f708-403f-b875-0777c7dc2409] Running
	I0919 18:43:17.474340  293537 system_pods.go:61] "snapshot-controller-56fcc65765-jbx4b" [08c94f64-7807-4552-b598-624dd9ca5fad] Running
	I0919 18:43:17.474344  293537 system_pods.go:61] "storage-provisioner" [a5319758-9b7e-4434-b3bb-2abf6f5f5a05] Running
	I0919 18:43:17.474350  293537 system_pods.go:74] duration metric: took 11.086673196s to wait for pod list to return data ...
	I0919 18:43:17.474360  293537 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:43:17.476991  293537 default_sa.go:45] found service account: "default"
	I0919 18:43:17.477019  293537 default_sa.go:55] duration metric: took 2.651822ms for default service account to be created ...
	I0919 18:43:17.477031  293537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:43:17.487749  293537 system_pods.go:86] 18 kube-system pods found
	I0919 18:43:17.487788  293537 system_pods.go:89] "coredns-7c65d6cfc9-lzshk" [fa76a4be-7a2f-482a-bb9a-f8b9caf2eed4] Running
	I0919 18:43:17.487838  293537 system_pods.go:89] "csi-hostpath-attacher-0" [e4afe744-fcb9-4ef1-83bc-7da6426a009e] Running
	I0919 18:43:17.487852  293537 system_pods.go:89] "csi-hostpath-resizer-0" [35bf4614-c53f-4a64-ba65-c4d2585a4618] Running
	I0919 18:43:17.487857  293537 system_pods.go:89] "csi-hostpathplugin-f4lvd" [c4e2104a-24a2-4d2b-982f-90c367e0f6f5] Running
	I0919 18:43:17.487865  293537 system_pods.go:89] "etcd-addons-971880" [48b082ac-da22-4582-a616-c7fc480b4ab7] Running
	I0919 18:43:17.487875  293537 system_pods.go:89] "kindnet-k2v8g" [0e23ba7f-3c08-474e-a24d-b217d7ad4fff] Running
	I0919 18:43:17.487881  293537 system_pods.go:89] "kube-apiserver-addons-971880" [2b208d09-ed22-4147-a82a-c346c0576a72] Running
	I0919 18:43:17.487889  293537 system_pods.go:89] "kube-controller-manager-addons-971880" [ef368deb-bcae-4de9-9cc2-02cce640782e] Running
	I0919 18:43:17.487896  293537 system_pods.go:89] "kube-ingress-dns-minikube" [afb5e949-2f5b-462a-89a2-809679640b8d] Running
	I0919 18:43:17.487914  293537 system_pods.go:89] "kube-proxy-pf8wk" [3daa047c-3145-421d-b44a-a991266a805e] Running
	I0919 18:43:17.487927  293537 system_pods.go:89] "kube-scheduler-addons-971880" [147489f6-4fd9-4831-8bc8-c03b9624170f] Running
	I0919 18:43:17.487932  293537 system_pods.go:89] "metrics-server-84c5f94fbc-jrbzm" [4dcd9c96-80a7-42f2-86ca-69d052a20c31] Running
	I0919 18:43:17.487948  293537 system_pods.go:89] "nvidia-device-plugin-daemonset-6b6sb" [d2508241-1d3e-43e2-b635-ccd577d441ef] Running
	I0919 18:43:17.487959  293537 system_pods.go:89] "registry-66c9cd494c-zjfvp" [95228612-f951-44f9-ac40-a54760497790] Running
	I0919 18:43:17.487964  293537 system_pods.go:89] "registry-proxy-mn6mx" [384cadea-3e7f-4b57-8edb-f51b9f4dde24] Running
	I0919 18:43:17.487969  293537 system_pods.go:89] "snapshot-controller-56fcc65765-hqvnz" [e4649166-f708-403f-b875-0777c7dc2409] Running
	I0919 18:43:17.487975  293537 system_pods.go:89] "snapshot-controller-56fcc65765-jbx4b" [08c94f64-7807-4552-b598-624dd9ca5fad] Running
	I0919 18:43:17.487979  293537 system_pods.go:89] "storage-provisioner" [a5319758-9b7e-4434-b3bb-2abf6f5f5a05] Running
	I0919 18:43:17.487987  293537 system_pods.go:126] duration metric: took 10.951104ms to wait for k8s-apps to be running ...
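	[note] The readiness scan above can be approximated with kubectl against the reported context; anything the wait would still block on shows up as a non-Running pod (minikube's actual check also inspects pod conditions, so this is only an approximation):
	
	    # STATUS is the third column of `kubectl get pods`; print only pods not yet running
	    kubectl --context addons-971880 get pods -n kube-system --no-headers \
	      | awk '$3 != "Running" && $3 != "Completed"'
	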
	I0919 18:43:17.488020  293537 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:43:17.488142  293537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:43:17.501314  293537 system_svc.go:56] duration metric: took 13.293118ms WaitForService to wait for kubelet
	I0919 18:43:17.501349  293537 kubeadm.go:582] duration metric: took 2m29.407710689s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:43:17.501369  293537 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:43:17.504944  293537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 18:43:17.504984  293537 node_conditions.go:123] node cpu capacity is 2
	I0919 18:43:17.504998  293537 node_conditions.go:105] duration metric: took 3.620313ms to run NodePressure ...
	I0919 18:43:17.505009  293537 start.go:241] waiting for startup goroutines ...
	I0919 18:43:17.505016  293537 start.go:246] waiting for cluster config update ...
	I0919 18:43:17.505032  293537 start.go:255] writing updated cluster config ...
	I0919 18:43:17.505333  293537 ssh_runner.go:195] Run: rm -f paused
	I0919 18:43:17.844712  293537 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:43:17.848004  293537 out.go:177] * Done! kubectl is now configured to use "addons-971880" cluster and "default" namespace by default
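	[note] At this point the cluster is usable from the host; a quick smoke check (illustrative, not part of the test run) would be:
	
	    kubectl config current-context   # addons-971880
	    kubectl get nodes                # one Ready control-plane node
	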
	
	
	==> CRI-O <==
	Sep 19 18:52:32 addons-971880 crio[951]: time="2024-09-19 18:52:32.988068193Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Sep 19 18:52:33 addons-971880 crio[951]: time="2024-09-19 18:52:33.057188783Z" level=info msg="Stopped pod sandbox: 4a13d55117319e27831a305be49d358475a2acd9bdf9eb06577c14a60872003d" id=3ed25861-c1ae-4d60-89c5-34d1b464ee2a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:52:33 addons-971880 conmon[7072]: conmon 92faf2590f234669a4ba <ninfo>: container 7082 exited with status 1
	Sep 19 18:52:33 addons-971880 crio[951]: time="2024-09-19 18:52:33.975078090Z" level=info msg="Stopping container: 7caf34ec85d4091f5871fc57eee2756a73e168b818b04afbec2da5507d28bb54 (timeout: 30s)" id=5e006d89-a892-4a76-9239-466ac594a5e9 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.002304372Z" level=info msg="Stopping container: b4ac50ced3cb9e2e2158d6d58ac42df32239427cce31c565b6046daaf0f21cf9 (timeout: 30s)" id=ff648411-3dcc-4cb2-9ad4-0f84dce0a1d7 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:52:34 addons-971880 conmon[3244]: conmon 7caf34ec85d4091f5871 <ninfo>: container 3255 exited with status 2
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.147287749Z" level=info msg="Stopped container 7caf34ec85d4091f5871fc57eee2756a73e168b818b04afbec2da5507d28bb54: kube-system/registry-66c9cd494c-zjfvp/registry" id=5e006d89-a892-4a76-9239-466ac594a5e9 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.148140228Z" level=info msg="Stopping pod sandbox: 4a7c6a4f8aa44efab8f19f73ec9d1da2c9410b8fcbee61957fe83a4028547dfc" id=0a07d8f5-dbfe-414a-a897-a89a7b12ef40 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.148395555Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-zjfvp Namespace:kube-system ID:4a7c6a4f8aa44efab8f19f73ec9d1da2c9410b8fcbee61957fe83a4028547dfc UID:95228612-f951-44f9-ac40-a54760497790 NetNS:/var/run/netns/03fd8682-2896-41ed-994d-f312ef4b857e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.148536084Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-zjfvp from CNI network \"kindnet\" (type=ptp)"
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.169948596Z" level=info msg="Stopped container b4ac50ced3cb9e2e2158d6d58ac42df32239427cce31c565b6046daaf0f21cf9: kube-system/registry-proxy-mn6mx/registry-proxy" id=ff648411-3dcc-4cb2-9ad4-0f84dce0a1d7 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.170816755Z" level=info msg="Stopping pod sandbox: a7de82746083306b4d26322357cbdabc359333cd14c6123895e81b2c5f19e151" id=13105e93-4795-4d6e-9e6c-1969d51df36f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.185398297Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-ZI3DBNHLBN6UTFYF - [0:0]\n:KUBE-HP-QVEJNMCUGNM2WEG2 - [0:0]\n:KUBE-HP-M4NOGRXLWCJKF5OZ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-nsltd_ingress-nginx_a9e27002-ba35-4377-9a70-4d68a416f3bf_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-ZI3DBNHLBN6UTFYF\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-nsltd_ingress-nginx_a9e27002-ba35-4377-9a70-4d68a416f3bf_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-M4NOGRXLWCJKF5OZ\n-A KUBE-HP-M4NOGRXLWCJKF5OZ -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-nsltd_ingress-nginx_a9e27002-ba35-4377-9a70-4d68a416f3bf_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-M4NOGRXLWCJKF5OZ -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-nsltd_ingress-nginx_a9e27002-ba35-4377-9a7
0-4d68a416f3bf_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.19:80\n-A KUBE-HP-ZI3DBNHLBN6UTFYF -s 10.244.0.19/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-nsltd_ingress-nginx_a9e27002-ba35-4377-9a70-4d68a416f3bf_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-ZI3DBNHLBN6UTFYF -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-nsltd_ingress-nginx_a9e27002-ba35-4377-9a70-4d68a416f3bf_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.19:443\n-X KUBE-HP-QVEJNMCUGNM2WEG2\nCOMMIT\n"
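	[note] Unescaped, the rule set embedded in the "Restoring iptables rules" message above is the following iptables-restore payload (reconstructed from the log for readability; the long --comment strings are abbreviated to their hostport suffix):
	
	    *nat
	    :KUBE-HP-ZI3DBNHLBN6UTFYF - [0:0]
	    :KUBE-HP-QVEJNMCUGNM2WEG2 - [0:0]
	    :KUBE-HP-M4NOGRXLWCJKF5OZ - [0:0]
	    :KUBE-HOSTPORTS - [0:0]
	    -A KUBE-HOSTPORTS -p tcp -m comment --comment "... hostport 443" -m tcp --dport 443 -j KUBE-HP-ZI3DBNHLBN6UTFYF
	    -A KUBE-HOSTPORTS -p tcp -m comment --comment "... hostport 80" -m tcp --dport 80 -j KUBE-HP-M4NOGRXLWCJKF5OZ
	    -A KUBE-HP-M4NOGRXLWCJKF5OZ -s 10.244.0.19/32 -m comment --comment "... hostport 80" -j KUBE-MARK-MASQ
	    -A KUBE-HP-M4NOGRXLWCJKF5OZ -p tcp -m comment --comment "... hostport 80" -m tcp -j DNAT --to-destination 10.244.0.19:80
	    -A KUBE-HP-ZI3DBNHLBN6UTFYF -s 10.244.0.19/32 -m comment --comment "... hostport 443" -j KUBE-MARK-MASQ
	    -A KUBE-HP-ZI3DBNHLBN6UTFYF -p tcp -m comment --comment "... hostport 443" -m tcp -j DNAT --to-destination 10.244.0.19:443
	    -X KUBE-HP-QVEJNMCUGNM2WEG2
	    COMMIT
	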
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.188487960Z" level=info msg="Closing host port tcp:5000"
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.191325505Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.191780938Z" level=info msg="Got pod network &{Name:registry-proxy-mn6mx Namespace:kube-system ID:a7de82746083306b4d26322357cbdabc359333cd14c6123895e81b2c5f19e151 UID:384cadea-3e7f-4b57-8edb-f51b9f4dde24 NetNS:/var/run/netns/656f4fcf-a972-4c55-b726-73248a0b15bc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.191993639Z" level=info msg="Deleting pod kube-system_registry-proxy-mn6mx from CNI network \"kindnet\" (type=ptp)"
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.212673138Z" level=info msg="Stopped pod sandbox: 4a7c6a4f8aa44efab8f19f73ec9d1da2c9410b8fcbee61957fe83a4028547dfc" id=0a07d8f5-dbfe-414a-a897-a89a7b12ef40 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.216095953Z" level=info msg="Removing container: 16f32e8b6ebbf4390dd7ce2dc80db5fe0d0dbd4b411f34195b9e3a4ebf3691ac" id=7c3c77c5-02ef-4f35-ae70-5d83e4b9a737 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.257589252Z" level=info msg="Removed container 16f32e8b6ebbf4390dd7ce2dc80db5fe0d0dbd4b411f34195b9e3a4ebf3691ac: gadget/gadget-xrcg4/gadget" id=7c3c77c5-02ef-4f35-ae70-5d83e4b9a737 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:52:34 addons-971880 crio[951]: time="2024-09-19 18:52:34.262022585Z" level=info msg="Stopped pod sandbox: a7de82746083306b4d26322357cbdabc359333cd14c6123895e81b2c5f19e151" id=13105e93-4795-4d6e-9e6c-1969d51df36f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:52:35 addons-971880 crio[951]: time="2024-09-19 18:52:35.237265534Z" level=info msg="Removing container: b4ac50ced3cb9e2e2158d6d58ac42df32239427cce31c565b6046daaf0f21cf9" id=0b44740b-c404-4469-8140-f7936db7a773 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:52:35 addons-971880 crio[951]: time="2024-09-19 18:52:35.275980578Z" level=info msg="Removed container b4ac50ced3cb9e2e2158d6d58ac42df32239427cce31c565b6046daaf0f21cf9: kube-system/registry-proxy-mn6mx/registry-proxy" id=0b44740b-c404-4469-8140-f7936db7a773 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:52:35 addons-971880 crio[951]: time="2024-09-19 18:52:35.281689619Z" level=info msg="Removing container: 7caf34ec85d4091f5871fc57eee2756a73e168b818b04afbec2da5507d28bb54" id=c7493ece-0f90-4751-a44c-511418d05119 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:52:35 addons-971880 crio[951]: time="2024-09-19 18:52:35.305782071Z" level=info msg="Removed container 7caf34ec85d4091f5871fc57eee2756a73e168b818b04afbec2da5507d28bb54: kube-system/registry-66c9cd494c-zjfvp/registry" id=c7493ece-0f90-4751-a44c-511418d05119 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	92faf2590f234       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            3 seconds ago       Exited              gadget                     7                   f5ac3575c8fee       gadget-xrcg4
	7e2229737603a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 10 minutes ago      Running             gcp-auth                   0                   01e59bcb2da91       gcp-auth-89d5ffd79-8f6t2
	d355220e4e1c0       registry.k8s.io/ingress-nginx/controller@sha256:22f9d129ae8c89a2cabbd13af3c1668944f3dd68fec186199b7024a0a2fc75b3             10 minutes ago      Running             controller                 0                   e5894e60a05af       ingress-nginx-controller-bc57996ff-nsltd
	e4ece102ee198       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             10 minutes ago      Running             local-path-provisioner     0                   567fd5373e217       local-path-provisioner-86d989889c-s9p2l
	c62dfc6e67af7       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              10 minutes ago      Running             yakd                       0                   bb619c50918f0       yakd-dashboard-67d98fc6b-bfrtb
	719f56df5239d       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   386544d95701f       nvidia-device-plugin-daemonset-6b6sb
	b02fc97f9417e       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             10 minutes ago      Exited              patch                      2                   a95ffb65bcca5       ingress-nginx-admission-patch-t7x2p
	e418333c9f79e       gcr.io/cloud-spanner-emulator/emulator@sha256:41ec188288c7943f488600462b2b74002814e52439be82d15de33c3ee4898a58               10 minutes ago      Running             cloud-spanner-emulator     0                   2d5041a3b3e10       cloud-spanner-emulator-769b77f747-wz2j4
	dd64b887fd1c5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   10 minutes ago      Exited              create                     0                   aacb6089fc3b4       ingress-nginx-admission-create-7dt4w
	0cc2661b051ab       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             10 minutes ago      Running             minikube-ingress-dns       0                   d819370df738c       kube-ingress-dns-minikube
	2211b84a8bcc0       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        10 minutes ago      Running             metrics-server             0                   022f53b7544e5       metrics-server-84c5f94fbc-jrbzm
	645c6e1070b57       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             11 minutes ago      Running             storage-provisioner        0                   61ec5f92f3e97       storage-provisioner
	c57cc379e1c9a       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             11 minutes ago      Running             coredns                    0                   2fb9e3187c953       coredns-7c65d6cfc9-lzshk
	dc4aa79f1b326       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             11 minutes ago      Running             kube-proxy                 0                   b43f35ceba531       kube-proxy-pf8wk
	dcda5994fb9da       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             11 minutes ago      Running             kindnet-cni                0                   874829284dbe9       kindnet-k2v8g
	4e8ba4e202807       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             11 minutes ago      Running             kube-controller-manager    0                   7ee5f4b8e79eb       kube-controller-manager-addons-971880
	d599c639765e1       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             11 minutes ago      Running             kube-scheduler             0                   a0d73f380837d       kube-scheduler-addons-971880
	a6739fa07ff39       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             11 minutes ago      Running             kube-apiserver             0                   92e7a9cf57f7c       kube-apiserver-addons-971880
	1a7797ceebe32       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             11 minutes ago      Running             etcd                       0                   0a51e9c6a88a2       etcd-addons-971880
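	[note] The table above is what `sudo crictl ps -a` prints on the node; the same records are available in machine-readable form with, e.g.:
	
	    sudo crictl ps -a -o json   # full container records as JSON
	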
	
	
	==> coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] <==
	[INFO] 10.244.0.15:34202 - 56364 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000210347s
	[INFO] 10.244.0.15:58149 - 38892 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002341045s
	[INFO] 10.244.0.15:58149 - 27857 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002397331s
	[INFO] 10.244.0.15:49676 - 34537 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000107306s
	[INFO] 10.244.0.15:49676 - 13548 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157021s
	[INFO] 10.244.0.15:57838 - 45202 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00012402s
	[INFO] 10.244.0.15:57838 - 669 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179569s
	[INFO] 10.244.0.15:51630 - 63490 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056164s
	[INFO] 10.244.0.15:37480 - 42395 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051873s
	[INFO] 10.244.0.15:37480 - 26265 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048607s
	[INFO] 10.244.0.15:51630 - 21823 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084332s
	[INFO] 10.244.0.15:55956 - 23539 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001275642s
	[INFO] 10.244.0.15:55956 - 9713 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001339642s
	[INFO] 10.244.0.15:54413 - 50779 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067175s
	[INFO] 10.244.0.15:54413 - 3672 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000064312s
	[INFO] 10.244.0.20:41195 - 28456 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00097449s
	[INFO] 10.244.0.20:38142 - 31604 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001120663s
	[INFO] 10.244.0.20:49823 - 61218 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160804s
	[INFO] 10.244.0.20:46939 - 4524 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127639s
	[INFO] 10.244.0.20:36103 - 53599 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010249s
	[INFO] 10.244.0.20:55932 - 17378 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129329s
	[INFO] 10.244.0.20:58542 - 47562 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002373504s
	[INFO] 10.244.0.20:41076 - 61778 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002174587s
	[INFO] 10.244.0.20:51892 - 37411 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001802296s
	[INFO] 10.244.0.20:53343 - 52840 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001659954s
	
	
	==> describe nodes <==
	Name:               addons-971880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-971880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-971880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_40_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-971880
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:40:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-971880
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:52:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:51:48 +0000   Thu, 19 Sep 2024 18:40:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:51:48 +0000   Thu, 19 Sep 2024 18:40:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:51:48 +0000   Thu, 19 Sep 2024 18:40:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:51:48 +0000   Thu, 19 Sep 2024 18:41:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-971880
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd7bc352662e4b16b74f8eda34921dfa
	  System UUID:                760732df-5c49-4c7a-baae-21e5ed371ca8
	  Boot ID:                    52db61fe-4049-4d60-8bc0-73f7fa38c59e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-wz2j4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-xrcg4                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-8f6t2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-nsltd    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-lzshk                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-addons-971880                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-k2v8g                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-971880                250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-971880       200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-pf8wk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-971880                100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-jrbzm             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-6b6sb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-s9p2l     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-bfrtb              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-971880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-971880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-971880 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node addons-971880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node addons-971880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node addons-971880 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node addons-971880 event: Registered Node addons-971880 in Controller
	  Normal   NodeReady                11m                kubelet          Node addons-971880 status is now: NodeReady
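	
Readiness gates like the test's wait steps key off the Ready row of the Conditions table above. A hedged client-go sketch that reads the same condition programmatically (the kubeconfig path is illustrative):

	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		node, err := clientset.CoreV1().Nodes().Get(context.Background(), "addons-971880", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same data kubectl renders in the Conditions table above.
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s reason=%s since=%s\n", cond.Status, cond.Reason, cond.LastTransitionTime)
			}
		}
	}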
	
	
	==> dmesg <==
	[Sep19 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014930] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.480178] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.743811] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.535974] kauditd_printk_skb: 36 callbacks suppressed
	[Sep19 17:29] hrtimer: interrupt took 7222366 ns
	[Sep19 17:52] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] <==
	{"level":"info","ts":"2024-09-19T18:40:38.476120Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:40:38.476422Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T18:40:38.476471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T18:40:38.476754Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:40:38.477130Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:40:38.477659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T18:40:38.477789Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:38.477860Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:38.477887Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:38.478195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-09-19T18:40:49.175324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.946086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-09-19T18:40:49.175485Z","caller":"traceutil/trace.go:171","msg":"trace[830918283] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:342; }","duration":"233.225348ms","start":"2024-09-19T18:40:48.942248Z","end":"2024-09-19T18:40:49.175473Z","steps":["trace[830918283] 'range keys from in-memory index tree'  (duration: 232.868524ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:40:51.797488Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.274026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2024-09-19T18:40:51.797733Z","caller":"traceutil/trace.go:171","msg":"trace[1182424122] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:370; }","duration":"109.630686ms","start":"2024-09-19T18:40:51.688089Z","end":"2024-09-19T18:40:51.797719Z","steps":["trace[1182424122] 'range keys from in-memory index tree'  (duration: 108.949628ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:51.921159Z","caller":"traceutil/trace.go:171","msg":"trace[533958715] linearizableReadLoop","detail":"{readStateIndex:381; appliedIndex:380; }","duration":"112.097711ms","start":"2024-09-19T18:40:51.809047Z","end":"2024-09-19T18:40:51.921145Z","steps":["trace[533958715] 'read index received'  (duration: 41.359334ms)","trace[533958715] 'applied index is now lower than readState.Index'  (duration: 70.737803ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:40:51.921521Z","caller":"traceutil/trace.go:171","msg":"trace[119087165] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"208.072803ms","start":"2024-09-19T18:40:51.713438Z","end":"2024-09-19T18:40:51.921511Z","steps":["trace[119087165] 'process raft request'  (duration: 207.578674ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:40:51.934289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.615898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-19T18:40:51.934428Z","caller":"traceutil/trace.go:171","msg":"trace[1898066912] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:371; }","duration":"125.3687ms","start":"2024-09-19T18:40:51.809041Z","end":"2024-09-19T18:40:51.934410Z","steps":["trace[1898066912] 'agreement among raft nodes before linearized reading'  (duration: 112.594212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:40:51.947086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.812529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-19T18:40:51.947245Z","caller":"traceutil/trace.go:171","msg":"trace[273625168] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:378; }","duration":"137.981505ms","start":"2024-09-19T18:40:51.809251Z","end":"2024-09-19T18:40:51.947233Z","steps":["trace[273625168] 'agreement among raft nodes before linearized reading'  (duration: 137.773891ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:52.209425Z","caller":"traceutil/trace.go:171","msg":"trace[74668238] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"112.887175ms","start":"2024-09-19T18:40:52.096520Z","end":"2024-09-19T18:40:52.209407Z","steps":["trace[74668238] 'process raft request'  (duration: 103.152266ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:52.740865Z","caller":"traceutil/trace.go:171","msg":"trace[748612513] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"103.493878ms","start":"2024-09-19T18:40:52.637355Z","end":"2024-09-19T18:40:52.740849Z","steps":["trace[748612513] 'process raft request'  (duration: 24.762129ms)","trace[748612513] 'compare'  (duration: 78.348669ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:50:38.544916Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1551}
	{"level":"info","ts":"2024-09-19T18:50:38.573319Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1551,"took":"27.857921ms","hash":3037137597,"current-db-size-bytes":6537216,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3432448,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-19T18:50:38.573371Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3037137597,"revision":1551,"compact-revision":-1}
	
	
	==> gcp-auth [7e2229737603afbb0dacc6d3df819da59af22f172e365f53a2f81a5439c8bcc4] <==
	2024/09/19 18:42:34 GCP Auth Webhook started!
	2024/09/19 18:43:17 Ready to marshal response ...
	2024/09/19 18:43:17 Ready to write response ...
	2024/09/19 18:43:18 Ready to marshal response ...
	2024/09/19 18:43:18 Ready to write response ...
	2024/09/19 18:43:18 Ready to marshal response ...
	2024/09/19 18:43:18 Ready to write response ...
	2024/09/19 18:51:32 Ready to marshal response ...
	2024/09/19 18:51:32 Ready to write response ...
	2024/09/19 18:51:38 Ready to marshal response ...
	2024/09/19 18:51:38 Ready to write response ...
	2024/09/19 18:52:02 Ready to marshal response ...
	2024/09/19 18:52:02 Ready to write response ...
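	
Each marshal/write pair above corresponds to one admission request served by the gcp-auth webhook. This is not the webhook's actual source, only a generic sketch of the decode, respond, marshal, write cycle those lines trace, assuming the k8s.io/api/admission/v1 types:

	package main
	
	import (
		"encoding/json"
		"log"
		"net/http"
	
		admissionv1 "k8s.io/api/admission/v1"
	)
	
	func serveMutate(w http.ResponseWriter, r *http.Request) {
		var review admissionv1.AdmissionReview
		if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
			http.Error(w, "bad AdmissionReview", http.StatusBadRequest)
			return
		}
		// Allow everything; a real mutating webhook would attach a JSONPatch here.
		review.Response = &admissionv1.AdmissionResponse{
			UID:     review.Request.UID,
			Allowed: true,
		}
		log.Println("Ready to marshal response ...")
		out, err := json.Marshal(review)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		log.Println("Ready to write response ...")
		w.Header().Set("Content-Type", "application/json")
		w.Write(out)
	}
	
	func main() {
		http.HandleFunc("/mutate", serveMutate)
		// A real webhook must serve TLS; plain HTTP keeps the sketch short.
		log.Fatal(http.ListenAndServe(":8443", nil))
	}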
	
	
	==> kernel <==
	 18:52:35 up  2:34,  0 users,  load average: 0.49, 0.42, 0.80
	Linux addons-971880 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] <==
	I0919 18:50:32.713805       1 main.go:299] handling current node
	I0919 18:50:42.713717       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:50:42.713751       1 main.go:299] handling current node
	I0919 18:50:52.714034       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:50:52.714070       1 main.go:299] handling current node
	I0919 18:51:02.720233       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:51:02.720272       1 main.go:299] handling current node
	I0919 18:51:12.718770       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:51:12.718807       1 main.go:299] handling current node
	I0919 18:51:22.720875       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:51:22.720911       1 main.go:299] handling current node
	I0919 18:51:32.714649       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:51:32.714776       1 main.go:299] handling current node
	I0919 18:51:42.715067       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:51:42.715104       1 main.go:299] handling current node
	I0919 18:51:52.713999       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:51:52.714126       1 main.go:299] handling current node
	I0919 18:52:02.713903       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:52:02.714038       1 main.go:299] handling current node
	I0919 18:52:12.713660       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:52:12.713710       1 main.go:299] handling current node
	I0919 18:52:22.713958       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:52:22.713992       1 main.go:299] handling current node
	I0919 18:52:32.718503       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:52:32.718631       1 main.go:299] handling current node
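	
The kindnet entries tick every ten seconds for the same single node. A toy sketch of that cadence, where the reconcile body is a hypothetical stand-in for kindnet's real route/CNI handling:

	package main
	
	import (
		"log"
		"time"
	)
	
	// reconcile stands in for kindnet's per-node handling (routes, CNI state).
	func reconcile(nodeIPs map[string]struct{}) {
		log.Printf("Handling node with IPs: %v", nodeIPs)
		log.Println("handling current node")
	}
	
	func main() {
		ips := map[string]struct{}{"192.168.49.2": {}}
		ticker := time.NewTicker(10 * time.Second) // matches the 10s cadence in the log
		defer ticker.Stop()
		for range ticker.C {
			reconcile(ips)
		}
	}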
	
	
	==> kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] <==
	W0919 18:41:54.701296       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 18:41:54.701374       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 18:41:54.703454       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 18:42:43.623274       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.9.142:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.9.142:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.9.142:443: connect: connection refused" logger="UnhandledError"
	W0919 18:42:43.623962       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 18:42:43.624132       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 18:42:43.683381       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 18:51:49.284184       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0919 18:51:51.111893       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0919 18:52:18.872919       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.873058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:18.898841       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.898895       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:18.939986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.940228       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:18.978013       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.978127       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:19.011407       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:19.011546       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 18:52:19.978902       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0919 18:52:20.012417       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 18:52:20.066661       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
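	
The aggregation errors show the apiserver repeatedly failing to reach the metrics-server Service backing the v1beta1.metrics.k8s.io APIService (connection refused, then 503), which is the same symptom the MetricsServer test failure reports. A discovery-client sketch that performs the equivalent availability probe from outside (kubeconfig path illustrative):

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/discovery"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		dc, err := discovery.NewDiscoveryClientForConfig(config)
		if err != nil {
			panic(err)
		}
		// Mirrors what the aggregator was failing to do: ask whether the
		// APIService's backend actually serves its group/version.
		res, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
		if err != nil {
			fmt.Println("metrics API unavailable:", err)
			return
		}
		for _, r := range res.APIResources {
			fmt.Println("served resource:", r.Name)
		}
	}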
	
	
	==> kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] <==
	I0919 18:52:19.024285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="9.206µs"
	E0919 18:52:19.980605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0919 18:52:20.015031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0919 18:52:20.068645       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:20.874307       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:20.874382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:20.968081       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:20.968144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:21.368527       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:21.368646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:22.880744       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:22.880785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:23.911243       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:23.911287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:24.093178       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:24.093221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:27.122954       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:27.123002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:27.420001       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:27.420042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:28.785695       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:28.785739       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:52:33.952851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="12.537µs"
	W0919 18:52:34.623044       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:34.623174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
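	
These reflector errors come from the controller-manager's metadata-only informers still watching the snapshot CRD group the apiserver had just removed (the "Terminating all watchers" lines above), so every re-list returns "the server could not find the requested resource" until the informers are torn down. A sketch, assuming client-go's metadatainformer package and an illustrative kubeconfig path, of how such a metadata watch is set up:

	package main
	
	import (
		"time"
	
		"k8s.io/apimachinery/pkg/runtime/schema"
		"k8s.io/client-go/metadata"
		"k8s.io/client-go/metadata/metadatainformer"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		client, err := metadata.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		factory := metadatainformer.NewSharedInformerFactory(client, 10*time.Minute)
	
		// A metadata-only (PartialObjectMetadata) watch on a CRD-backed
		// resource; deleting the CRD leaves this watch failing as logged.
		gvr := schema.GroupVersionResource{Group: "snapshot.storage.k8s.io", Version: "v1", Resource: "volumesnapshots"}
		informer := factory.ForResource(gvr).Informer()
	
		stop := make(chan struct{})
		defer close(stop)
		go informer.Run(stop)
		select {}
	}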
	
	
	==> kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] <==
	I0919 18:40:52.838027       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:40:53.554142       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 18:40:53.554262       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:40:53.934955       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:40:53.935024       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:40:53.938053       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:40:53.938361       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:40:53.938589       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:40:53.939672       1 config.go:199] "Starting service config controller"
	I0919 18:40:53.939717       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:40:53.939750       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:40:53.939765       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:40:53.940395       1 config.go:328] "Starting node config controller"
	I0919 18:40:53.940414       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:40:54.042662       1 shared_informer.go:320] Caches are synced for node config
	I0919 18:40:54.056362       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:40:54.056393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] <==
	W0919 18:40:41.910847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:40:41.910906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.911000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:40:41.911041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.911123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:40:41.911163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.916391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 18:40:41.916611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.916751       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 18:40:41.916806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.916886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 18:40:41.916941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:41.917057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:41.917171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:40:41.917314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:40:41.917429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:40:41.917859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.918004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:40:41.918067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 18:40:43.005131       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
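	
The scheduler's forbidden errors are ordinary start-up noise: its RBAC bindings had not propagated when the informers first listed, and the final client-ca sync line marks the point they resolve. A sketch of checking such a permission programmatically via SelfSubjectAccessReview, the client-go analogue of `kubectl auth can-i` (kubeconfig path illustrative):

	package main
	
	import (
		"context"
		"fmt"
	
		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Asks the apiserver: may the identity behind this config list nodes?
		review := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "nodes"},
			},
		}
		resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), review, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
	}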
	
	
	==> kubelet <==
	Sep 19 18:52:33 addons-971880 kubelet[1465]: I0919 18:52:33.340061    1465 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/93999425-2c5a-45e6-bcca-01428c3c6c05-gcp-creds\") on node \"addons-971880\" DevicePath \"\""
	Sep 19 18:52:33 addons-971880 kubelet[1465]: I0919 18:52:33.834020    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93999425-2c5a-45e6-bcca-01428c3c6c05" path="/var/lib/kubelet/pods/93999425-2c5a-45e6-bcca-01428c3c6c05/volumes"
	Sep 19 18:52:34 addons-971880 kubelet[1465]: E0919 18:52:34.127167    1465 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726771954126608661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:488968,},InodesUsed:&UInt64Value{Value:189,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:52:34 addons-971880 kubelet[1465]: E0919 18:52:34.127200    1465 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726771954126608661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:488968,},InodesUsed:&UInt64Value{Value:189,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:52:34 addons-971880 kubelet[1465]: E0919 18:52:34.169224    1465 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c is running failed: container process not found" containerID="92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 19 18:52:34 addons-971880 kubelet[1465]: E0919 18:52:34.169343    1465 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c is running failed: container process not found" containerID="92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 19 18:52:34 addons-971880 kubelet[1465]: E0919 18:52:34.170326    1465 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c is running failed: container process not found" containerID="92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 19 18:52:34 addons-971880 kubelet[1465]: E0919 18:52:34.170407    1465 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c is running failed: container process not found" containerID="92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 19 18:52:34 addons-971880 kubelet[1465]: E0919 18:52:34.172273    1465 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c is running failed: container process not found" containerID="92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 19 18:52:34 addons-971880 kubelet[1465]: E0919 18:52:34.172539    1465 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c is running failed: container process not found" containerID="92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c" cmd=["/bin/gadgettracermanager","-liveness"]
	Sep 19 18:52:34 addons-971880 kubelet[1465]: I0919 18:52:34.198650    1465 scope.go:117] "RemoveContainer" containerID="16f32e8b6ebbf4390dd7ce2dc80db5fe0d0dbd4b411f34195b9e3a4ebf3691ac"
	Sep 19 18:52:34 addons-971880 kubelet[1465]: I0919 18:52:34.198946    1465 scope.go:117] "RemoveContainer" containerID="92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c"
	Sep 19 18:52:34 addons-971880 kubelet[1465]: E0919 18:52:34.199095    1465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xrcg4_gadget(61a624af-e1a5-423b-b133-e57dd4587edb)\"" pod="gadget/gadget-xrcg4" podUID="61a624af-e1a5-423b-b133-e57dd4587edb"
	Sep 19 18:52:34 addons-971880 kubelet[1465]: I0919 18:52:34.350216    1465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg6q8\" (UniqueName: \"kubernetes.io/projected/384cadea-3e7f-4b57-8edb-f51b9f4dde24-kube-api-access-gg6q8\") pod \"384cadea-3e7f-4b57-8edb-f51b9f4dde24\" (UID: \"384cadea-3e7f-4b57-8edb-f51b9f4dde24\") "
	Sep 19 18:52:34 addons-971880 kubelet[1465]: I0919 18:52:34.350268    1465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6m2v\" (UniqueName: \"kubernetes.io/projected/95228612-f951-44f9-ac40-a54760497790-kube-api-access-q6m2v\") pod \"95228612-f951-44f9-ac40-a54760497790\" (UID: \"95228612-f951-44f9-ac40-a54760497790\") "
	Sep 19 18:52:34 addons-971880 kubelet[1465]: I0919 18:52:34.352796    1465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/384cadea-3e7f-4b57-8edb-f51b9f4dde24-kube-api-access-gg6q8" (OuterVolumeSpecName: "kube-api-access-gg6q8") pod "384cadea-3e7f-4b57-8edb-f51b9f4dde24" (UID: "384cadea-3e7f-4b57-8edb-f51b9f4dde24"). InnerVolumeSpecName "kube-api-access-gg6q8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:52:34 addons-971880 kubelet[1465]: I0919 18:52:34.353005    1465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95228612-f951-44f9-ac40-a54760497790-kube-api-access-q6m2v" (OuterVolumeSpecName: "kube-api-access-q6m2v") pod "95228612-f951-44f9-ac40-a54760497790" (UID: "95228612-f951-44f9-ac40-a54760497790"). InnerVolumeSpecName "kube-api-access-q6m2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:52:34 addons-971880 kubelet[1465]: I0919 18:52:34.451317    1465 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gg6q8\" (UniqueName: \"kubernetes.io/projected/384cadea-3e7f-4b57-8edb-f51b9f4dde24-kube-api-access-gg6q8\") on node \"addons-971880\" DevicePath \"\""
	Sep 19 18:52:34 addons-971880 kubelet[1465]: I0919 18:52:34.451353    1465 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-q6m2v\" (UniqueName: \"kubernetes.io/projected/95228612-f951-44f9-ac40-a54760497790-kube-api-access-q6m2v\") on node \"addons-971880\" DevicePath \"\""
	Sep 19 18:52:35 addons-971880 kubelet[1465]: I0919 18:52:35.235229    1465 scope.go:117] "RemoveContainer" containerID="b4ac50ced3cb9e2e2158d6d58ac42df32239427cce31c565b6046daaf0f21cf9"
	Sep 19 18:52:35 addons-971880 kubelet[1465]: I0919 18:52:35.238379    1465 scope.go:117] "RemoveContainer" containerID="92faf2590f234669a4bab3a2e1e525dfd1f081bd09072db33c8886977393b31c"
	Sep 19 18:52:35 addons-971880 kubelet[1465]: E0919 18:52:35.238544    1465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xrcg4_gadget(61a624af-e1a5-423b-b133-e57dd4587edb)\"" pod="gadget/gadget-xrcg4" podUID="61a624af-e1a5-423b-b133-e57dd4587edb"
	Sep 19 18:52:35 addons-971880 kubelet[1465]: I0919 18:52:35.276522    1465 scope.go:117] "RemoveContainer" containerID="7caf34ec85d4091f5871fc57eee2756a73e168b818b04afbec2da5507d28bb54"
	Sep 19 18:52:35 addons-971880 kubelet[1465]: I0919 18:52:35.833421    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="384cadea-3e7f-4b57-8edb-f51b9f4dde24" path="/var/lib/kubelet/pods/384cadea-3e7f-4b57-8edb-f51b9f4dde24/volumes"
	Sep 19 18:52:35 addons-971880 kubelet[1465]: I0919 18:52:35.834629    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95228612-f951-44f9-ac40-a54760497790" path="/var/lib/kubelet/pods/95228612-f951-44f9-ac40-a54760497790/volumes"
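	
The repeated ExecSync failures are the kubelet running the gadget container's exec liveness probe against a process that has already exited; once the container is gone, every probe returns NotFound and restarts are throttled under the 5m0s CrashLoopBackOff cap. A sketch of how such a probe is declared with the corev1 API types (the period and threshold values here are illustrative, not taken from the gadget DaemonSet):

	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
	)
	
	func main() {
		// An exec liveness probe like the one the kubelet was driving above.
		probe := &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{
					Command: []string{"/bin/gadgettracermanager", "-liveness"},
				},
			},
			PeriodSeconds:    5, // illustrative
			FailureThreshold: 3, // illustrative
		}
		fmt.Printf("%+v\n", probe)
	}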
	
	
	==> storage-provisioner [645c6e1070b57c423d66af2e3d6e057cece2b42bc10fd145e4e32e7603750853] <==
	I0919 18:41:34.075595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:41:34.089415       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:41:34.089614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:41:34.099519       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:41:34.099789       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-971880_54ef0796-8bc7-4c84-86c5-8ebc4c91b26f!
	I0919 18:41:34.100759       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f411cc94-3279-4140-8a35-80322ca09e0a", APIVersion:"v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-971880_54ef0796-8bc7-4c84-86c5-8ebc4c91b26f became leader
	I0919 18:41:34.201066       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-971880_54ef0796-8bc7-4c84-86c5-8ebc4c91b26f!
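	
The provisioner's start-up is textbook client-go leader election: acquire the kube-system/k8s.io-minikube-hostpath lock, then start the controller. A hedged sketch of the same flow; the log shows an Endpoints-based lock, while this uses the current Lease lock, and the kubeconfig path and timings are illustrative:

	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		hostname, _ := os.Hostname()
	
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     clientset.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("successfully acquired lease") },
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}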
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-971880 -n addons-971880
helpers_test.go:261: (dbg) Run:  kubectl --context addons-971880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-7dt4w ingress-nginx-admission-patch-t7x2p
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-971880 describe pod busybox ingress-nginx-admission-create-7dt4w ingress-nginx-admission-patch-t7x2p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-971880 describe pod busybox ingress-nginx-admission-create-7dt4w ingress-nginx-admission-patch-t7x2p: exit status 1 (99.860519ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-971880/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:43:18 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w22nf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w22nf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-971880
	  Normal   Pulling    7m40s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m39s (x4 over 9m18s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m39s (x4 over 9m18s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m28s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m15s (x20 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7dt4w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-t7x2p" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-971880 describe pod busybox ingress-nginx-admission-create-7dt4w ingress-nginx-admission-patch-t7x2p: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.10s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (151.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-971880 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-971880 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-971880 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b821eb02-af13-48de-bb33-0104f407fa1d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b821eb02-af13-48de-bb33-0104f407fa1d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.0042872s
I0919 18:52:56.396026  292666 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-971880 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.881781379s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
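
Annotation (not part of the captured log): exit status 28 relayed through ssh is curl's operation-timeout code, so the request to the ingress controller hung rather than being refused. A minimal sketch of the same probe with an explicit client-side timeout, assuming the profile is still running:

	# -m 10 bounds the whole transfer; the Host header selects the nginx Ingress rule.
	minikube -p addons-971880 ssh -- curl -s -m 10 -o /dev/null -w '%{http_code}\n' http://127.0.0.1/ -H "Host: nginx.example.com"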
addons_test.go:288: (dbg) Run:  kubectl --context addons-971880 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-971880 addons disable ingress-dns --alsologtostderr -v=1: (1.60564233s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-971880 addons disable ingress --alsologtostderr -v=1: (7.792331507s)
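
Annotation (not part of the captured log): the contents of testdata/nginx-ingress-v1.yaml are not reproduced in this report. An Ingress of roughly the following shape would match the probe above; the object name is illustrative, not the test's actual manifest:

	# Route requests carrying Host: nginx.example.com to the nginx service on port 80.
	kubectl --context addons-971880 create ingress nginx-example --rule="nginx.example.com/*=nginx:80"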
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-971880
helpers_test.go:235: (dbg) docker inspect addons-971880:

-- stdout --
	[
	    {
	        "Id": "656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057",
	        "Created": "2024-09-19T18:40:21.693648884Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294019,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T18:40:21.83370316Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/hostname",
	        "HostsPath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/hosts",
	        "LogPath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057-json.log",
	        "Name": "/addons-971880",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-971880:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-971880",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997-init/diff:/var/lib/docker/overlay2/01d9e9e08c815432b8994f686c30467e8ad0d2e87cf6790233377a53c691e8f4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-971880",
	                "Source": "/var/lib/docker/volumes/addons-971880/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-971880",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-971880",
	                "name.minikube.sigs.k8s.io": "addons-971880",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8401fb271cde0fae79ea1c883e095a5f34d887cc56bfc81485e9925601a92a9a",
	            "SandboxKey": "/var/run/docker/netns/8401fb271cde",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-971880": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d62f700a78daed261ed14f4bb32a66890d0b280b5d5a72af727d194426d28141",
	                    "EndpointID": "e792600fa39aac0b873f2e9aacc195668339c4f184c5b304571be40ad512fdb9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-971880",
	                        "656ffd17b558"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
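
Annotation (not part of the captured log): the inspect dump above is the full container record; the host-side port mappings alone can be extracted with a filter, assuming jq is installed on the host:

	# 22/tcp -> 33133 is the SSH port the provisioning logs below connect through.
	docker inspect addons-971880 | jq '.[0].NetworkSettings.Ports'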
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-971880 -n addons-971880
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-971880 logs -n 25: (1.464564428s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-975733   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p download-only-975733              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-975733              | download-only-975733   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | -o=json --download-only              | download-only-217912   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p download-only-217912              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:40 UTC |
	| delete  | -p download-only-217912              | download-only-217912   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| delete  | -p download-only-975733              | download-only-975733   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| delete  | -p download-only-217912              | download-only-217912   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| start   | --download-only -p                   | download-docker-592744 | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | download-docker-592744               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-592744            | download-docker-592744 | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| start   | --download-only -p                   | binary-mirror-388144   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | binary-mirror-388144                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33855               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-388144              | binary-mirror-388144   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| addons  | enable dashboard -p                  | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | addons-971880                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | addons-971880                        |                        |         |         |                     |                     |
	| start   | -p addons-971880 --wait=true         | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:43 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-971880 addons                 | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-971880 addons                 | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-971880 ip                     | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	| addons  | addons-971880 addons disable         | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | addons-971880                        |                        |         |         |                     |                     |
	| ssh     | addons-971880 ssh curl -s            | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-971880 ip                     | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	| addons  | addons-971880 addons disable         | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-971880 addons disable         | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:40:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:40:14.795022  293537 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:40:14.795209  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:40:14.795239  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:40:14.795263  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:40:14.795520  293537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 18:40:14.796051  293537 out.go:352] Setting JSON to false
	I0919 18:40:14.796950  293537 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8547,"bootTime":1726762668,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 18:40:14.797050  293537 start.go:139] virtualization:  
	I0919 18:40:14.799511  293537 out.go:177] * [addons-971880] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0919 18:40:14.802404  293537 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:40:14.802594  293537 notify.go:220] Checking for updates...
	I0919 18:40:14.806697  293537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:40:14.809013  293537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 18:40:14.810889  293537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	I0919 18:40:14.813452  293537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 18:40:14.815382  293537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:40:14.817599  293537 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:40:14.840916  293537 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:40:14.841034  293537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:40:14.895857  293537 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-19 18:40:14.88564199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:40:14.895981  293537 docker.go:318] overlay module found
	I0919 18:40:14.898681  293537 out.go:177] * Using the docker driver based on user configuration
	I0919 18:40:14.900591  293537 start.go:297] selected driver: docker
	I0919 18:40:14.900609  293537 start.go:901] validating driver "docker" against <nil>
	I0919 18:40:14.900622  293537 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:40:14.901261  293537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:40:14.949650  293537 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-19 18:40:14.940202371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:40:14.949868  293537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:40:14.950096  293537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:40:14.952238  293537 out.go:177] * Using Docker driver with root privileges
	I0919 18:40:14.954169  293537 cni.go:84] Creating CNI manager for ""
	I0919 18:40:14.954244  293537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:40:14.954258  293537 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 18:40:14.954352  293537 start.go:340] cluster config:
	{Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:40:14.957664  293537 out.go:177] * Starting "addons-971880" primary control-plane node in "addons-971880" cluster
	I0919 18:40:14.959288  293537 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 18:40:14.961126  293537 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:40:14.962695  293537 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:14.962751  293537 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0919 18:40:14.962778  293537 cache.go:56] Caching tarball of preloaded images
	I0919 18:40:14.962775  293537 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:40:14.962860  293537 preload.go:172] Found /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0919 18:40:14.962870  293537 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 18:40:14.963218  293537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/config.json ...
	I0919 18:40:14.963237  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/config.json: {Name:mkdcb27e8211740d95283674cbbbe61d3cf7cd3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:14.982197  293537 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon, skipping pull
	I0919 18:40:14.982222  293537 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in daemon, skipping load
	I0919 18:40:14.982238  293537 cache.go:194] Successfully downloaded all kic artifacts
	I0919 18:40:14.982271  293537 start.go:360] acquireMachinesLock for addons-971880: {Name:mk9a87d1a88ed96332d84a90b344d67278fbcfbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:40:14.982383  293537 start.go:364] duration metric: took 90.97µs to acquireMachinesLock for "addons-971880"
	I0919 18:40:14.982415  293537 start.go:93] Provisioning new machine with config: &{Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:40:14.982485  293537 start.go:125] createHost starting for "" (driver="docker")
	I0919 18:40:14.985182  293537 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0919 18:40:14.985446  293537 start.go:159] libmachine.API.Create for "addons-971880" (driver="docker")
	I0919 18:40:14.985494  293537 client.go:168] LocalClient.Create starting
	I0919 18:40:14.985608  293537 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem
	I0919 18:40:15.651179  293537 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem
	I0919 18:40:16.244767  293537 cli_runner.go:164] Run: docker network inspect addons-971880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 18:40:16.259573  293537 cli_runner.go:211] docker network inspect addons-971880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 18:40:16.259663  293537 network_create.go:284] running [docker network inspect addons-971880] to gather additional debugging logs...
	I0919 18:40:16.259686  293537 cli_runner.go:164] Run: docker network inspect addons-971880
	W0919 18:40:16.278892  293537 cli_runner.go:211] docker network inspect addons-971880 returned with exit code 1
	I0919 18:40:16.278928  293537 network_create.go:287] error running [docker network inspect addons-971880]: docker network inspect addons-971880: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-971880 not found
	I0919 18:40:16.278941  293537 network_create.go:289] output of [docker network inspect addons-971880]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-971880 not found
	
	** /stderr **
	I0919 18:40:16.279047  293537 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:40:16.293226  293537 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001753420}
	I0919 18:40:16.293268  293537 network_create.go:124] attempt to create docker network addons-971880 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 18:40:16.293334  293537 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-971880 addons-971880
	I0919 18:40:16.363897  293537 network_create.go:108] docker network addons-971880 192.168.49.0/24 created
	I0919 18:40:16.363930  293537 kic.go:121] calculated static IP "192.168.49.2" for the "addons-971880" container
	I0919 18:40:16.364004  293537 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 18:40:16.380178  293537 cli_runner.go:164] Run: docker volume create addons-971880 --label name.minikube.sigs.k8s.io=addons-971880 --label created_by.minikube.sigs.k8s.io=true
	I0919 18:40:16.395244  293537 oci.go:103] Successfully created a docker volume addons-971880
	I0919 18:40:16.395327  293537 cli_runner.go:164] Run: docker run --rm --name addons-971880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-971880 --entrypoint /usr/bin/test -v addons-971880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0919 18:40:17.535557  293537 cli_runner.go:217] Completed: docker run --rm --name addons-971880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-971880 --entrypoint /usr/bin/test -v addons-971880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.140188795s)
	I0919 18:40:17.535586  293537 oci.go:107] Successfully prepared a docker volume addons-971880
	I0919 18:40:17.535611  293537 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:17.535632  293537 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 18:40:17.535690  293537 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-971880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 18:40:21.621921  293537 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-971880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.086185357s)
	I0919 18:40:21.621955  293537 kic.go:203] duration metric: took 4.086318543s to extract preloaded images to volume ...
	W0919 18:40:21.622102  293537 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0919 18:40:21.622210  293537 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 18:40:21.679227  293537 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-971880 --name addons-971880 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-971880 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-971880 --network addons-971880 --ip 192.168.49.2 --volume addons-971880:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0919 18:40:22.007220  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Running}}
	I0919 18:40:22.032291  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:22.055098  293537 cli_runner.go:164] Run: docker exec addons-971880 stat /var/lib/dpkg/alternatives/iptables
	I0919 18:40:22.125415  293537 oci.go:144] the created container "addons-971880" has a running status.
	I0919 18:40:22.125445  293537 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa...
	I0919 18:40:22.576988  293537 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 18:40:22.615973  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:22.638224  293537 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 18:40:22.638243  293537 kic_runner.go:114] Args: [docker exec --privileged addons-971880 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 18:40:22.722473  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:22.742554  293537 machine.go:93] provisionDockerMachine start ...
	I0919 18:40:22.743352  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:22.774687  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:22.774949  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:22.774959  293537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 18:40:22.948505  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-971880
	
	I0919 18:40:22.948580  293537 ubuntu.go:169] provisioning hostname "addons-971880"
	I0919 18:40:22.948677  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:22.969896  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:22.970140  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:22.970160  293537 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-971880 && echo "addons-971880" | sudo tee /etc/hostname
	I0919 18:40:23.142085  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-971880
	
	I0919 18:40:23.142233  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:23.173045  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:23.173282  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:23.173299  293537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-971880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-971880/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-971880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:40:23.320150  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:40:23.320184  293537 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-287261/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-287261/.minikube}
	I0919 18:40:23.320208  293537 ubuntu.go:177] setting up certificates
	I0919 18:40:23.320217  293537 provision.go:84] configureAuth start
	I0919 18:40:23.320288  293537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-971880
	I0919 18:40:23.336724  293537 provision.go:143] copyHostCerts
	I0919 18:40:23.336810  293537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem (1082 bytes)
	I0919 18:40:23.336932  293537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem (1123 bytes)
	I0919 18:40:23.337048  293537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem (1675 bytes)
	I0919 18:40:23.337107  293537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem org=jenkins.addons-971880 san=[127.0.0.1 192.168.49.2 addons-971880 localhost minikube]
	I0919 18:40:23.784639  293537 provision.go:177] copyRemoteCerts
	I0919 18:40:23.784720  293537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:40:23.784763  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:23.802489  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:23.909246  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 18:40:23.934171  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 18:40:23.958543  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 18:40:23.982904  293537 provision.go:87] duration metric: took 662.664687ms to configureAuth
	I0919 18:40:23.982931  293537 ubuntu.go:193] setting minikube options for container-runtime
	I0919 18:40:23.983122  293537 config.go:182] Loaded profile config "addons-971880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:23.983236  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.012307  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:24.012571  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:24.012592  293537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 18:40:24.296885  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 18:40:24.296910  293537 machine.go:96] duration metric: took 1.554333983s to provisionDockerMachine
	I0919 18:40:24.296921  293537 client.go:171] duration metric: took 9.31141665s to LocalClient.Create
	I0919 18:40:24.296935  293537 start.go:167] duration metric: took 9.311489709s to libmachine.API.Create "addons-971880"
	I0919 18:40:24.296951  293537 start.go:293] postStartSetup for "addons-971880" (driver="docker")
	I0919 18:40:24.296965  293537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:40:24.297040  293537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:40:24.297084  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.314189  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.421363  293537 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:40:24.424465  293537 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:40:24.424502  293537 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:40:24.424514  293537 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:40:24.424521  293537 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 18:40:24.424532  293537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/addons for local assets ...
	I0919 18:40:24.424607  293537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/files for local assets ...
	I0919 18:40:24.424637  293537 start.go:296] duration metric: took 127.676808ms for postStartSetup
	I0919 18:40:24.424947  293537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-971880
	I0919 18:40:24.441276  293537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/config.json ...
	I0919 18:40:24.441573  293537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:40:24.441628  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.457539  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.557015  293537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 18:40:24.561316  293537 start.go:128] duration metric: took 9.578811258s to createHost
	I0919 18:40:24.561341  293537 start.go:83] releasing machines lock for "addons-971880", held for 9.578944592s
	I0919 18:40:24.561411  293537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-971880
	I0919 18:40:24.576931  293537 ssh_runner.go:195] Run: cat /version.json
	I0919 18:40:24.576990  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.576994  293537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:40:24.577069  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.594043  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.600367  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.825657  293537 ssh_runner.go:195] Run: systemctl --version
	I0919 18:40:24.829981  293537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 18:40:24.973384  293537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:40:24.977678  293537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:40:24.998966  293537 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 18:40:24.999140  293537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:40:25.045694  293537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
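Note the rename-don't-delete pattern: competing loopback/bridge/podman CNI configs get a .mk_disabled suffix so they can be restored later. A sketch of the same idiom (predicates mirror the Run line above; the quoted "$1" form avoids the word-splitting of the raw {} substitution):

	# Sketch: reversibly park bridge/podman CNI configs out of CRI-O's way.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;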
	I0919 18:40:25.045717  293537 start.go:495] detecting cgroup driver to use...
	I0919 18:40:25.045766  293537 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:40:25.045818  293537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 18:40:25.065419  293537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 18:40:25.077859  293537 docker.go:217] disabling cri-docker service (if available) ...
	I0919 18:40:25.077968  293537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 18:40:25.094706  293537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 18:40:25.112860  293537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 18:40:25.209683  293537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 18:40:25.302151  293537 docker.go:233] disabling docker service ...
	I0919 18:40:25.302273  293537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 18:40:25.323334  293537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 18:40:25.336378  293537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 18:40:25.429738  293537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 18:40:25.535609  293537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 18:40:25.547524  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:40:25.564274  293537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 18:40:25.564345  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.574971  293537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 18:40:25.575106  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.586035  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.596962  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.607358  293537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:40:25.617457  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.627519  293537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.643763  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.653582  293537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:40:25.662617  293537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
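Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl set. A verification sketch (the expected values are reconstructed from the sed expressions above, not captured from the node):

	# Sketch: confirm the keys the sed edits set in the CRI-O drop-in.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the sed expressions:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",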
	I0919 18:40:25.671391  293537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:25.758584  293537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 18:40:25.881679  293537 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 18:40:25.881797  293537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 18:40:25.885692  293537 start.go:563] Will wait 60s for crictl version
	I0919 18:40:25.885756  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:40:25.889290  293537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:40:25.931872  293537 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 18:40:25.932001  293537 ssh_runner.go:195] Run: crio --version
	I0919 18:40:25.972764  293537 ssh_runner.go:195] Run: crio --version
	I0919 18:40:26.020911  293537 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0919 18:40:26.023368  293537 cli_runner.go:164] Run: docker network inspect addons-971880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:40:26.039908  293537 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 18:40:26.044177  293537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
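The one-liner above is minikube's idempotent /etc/hosts edit: drop any stale host.minikube.internal line, append a fresh one, and copy the temp file back into place (the same pattern recurs below for control-plane.minikube.internal). As a reusable sketch:

	# Sketch: idempotently pin NAME -> IP in /etc/hosts (the pattern above).
	NAME=host.minikube.internal IP=192.168.49.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts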
	I0919 18:40:26.057328  293537 kubeadm.go:883] updating cluster {Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:40:26.057469  293537 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:26.057534  293537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:40:26.133555  293537 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:40:26.133583  293537 crio.go:433] Images already preloaded, skipping extraction
	I0919 18:40:26.133643  293537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:40:26.173236  293537 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:40:26.173261  293537 cache_images.go:84] Images are preloaded, skipping loading
	I0919 18:40:26.173270  293537 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0919 18:40:26.173424  293537 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-971880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 18:40:26.173545  293537 ssh_runner.go:195] Run: crio config
	I0919 18:40:26.220780  293537 cni.go:84] Creating CNI manager for ""
	I0919 18:40:26.220804  293537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:40:26.220815  293537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:40:26.220841  293537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-971880 NodeName:addons-971880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:40:26.220981  293537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-971880"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 18:40:26.221063  293537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:40:26.230055  293537 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:40:26.230128  293537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:40:26.239075  293537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 18:40:26.257194  293537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:40:26.275405  293537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0919 18:40:26.294207  293537 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 18:40:26.297608  293537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:40:26.308590  293537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:26.398728  293537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:40:26.412875  293537 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880 for IP: 192.168.49.2
	I0919 18:40:26.412939  293537 certs.go:194] generating shared ca certs ...
	I0919 18:40:26.412971  293537 certs.go:226] acquiring lock for ca certs: {Name:mk523f1ff29ba1b125a662d8a16466e488af99fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:26.413155  293537 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key
	I0919 18:40:27.099466  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt ...
	I0919 18:40:27.099502  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt: {Name:mk72ad373d845c3dfe8b530e275b045be3f9ea44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.099743  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key ...
	I0919 18:40:27.099758  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key: {Name:mk6927d0aa607f1c3942a9244061e169aede669f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.099875  293537 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key
	I0919 18:40:27.690254  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt ...
	I0919 18:40:27.690284  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt: {Name:mka95663104efa43935e2407319e69b9f1a74e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.690470  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key ...
	I0919 18:40:27.690482  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key: {Name:mk6fc29661ffdcbf98927cc74a4761e2f385ba1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.690561  293537 certs.go:256] generating profile certs ...
	I0919 18:40:27.690623  293537 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.key
	I0919 18:40:27.690651  293537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt with IP's: []
	I0919 18:40:28.051916  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt ...
	I0919 18:40:28.051949  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: {Name:mke5e1b1ca475791e881a9b267a71ff7d5e349d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.052153  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.key ...
	I0919 18:40:28.052169  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.key: {Name:mk22f66e5d44e53266af14f016ae74fdede1016f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.052261  293537 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f
	I0919 18:40:28.052281  293537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 18:40:28.439619  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f ...
	I0919 18:40:28.439652  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f: {Name:mk5ef899798c2f7f8cf7a6ca8b6bd7730a17a415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.439841  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f ...
	I0919 18:40:28.439855  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f: {Name:mkeaf10cc0c4d5344f5ac3188436e53b1f1f489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.439951  293537 certs.go:381] copying /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f -> /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt
	I0919 18:40:28.440041  293537 certs.go:385] copying /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f -> /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key
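The apiserver serving certificate generated above is signed for 10.96.0.1 (the first IP of the service CIDR), 127.0.0.1, 10.0.0.1, and the node IP 192.168.49.2. minikube issues it in Go (crypto.go), but an equivalent openssl sketch, with hypothetical file names, looks like:

	# Sketch (hypothetical filenames): issue an apiserver cert with the same SANs.
	openssl req -new -newkey rsa:2048 -nodes -subj '/CN=minikube' \
	  -keyout apiserver.key -out apiserver.csr
	printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.49.2\n' > san.ext
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -extfile san.ext -days 365 -out apiserver.crt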
	I0919 18:40:28.440125  293537 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key
	I0919 18:40:28.440146  293537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt with IP's: []
	I0919 18:40:28.762615  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt ...
	I0919 18:40:28.762647  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt: {Name:mkc47d434d3ac3df7a1893f6cdfe2041dc8c73e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.762858  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key ...
	I0919 18:40:28.762874  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key: {Name:mk13c604db6dc59e6437e08ad373c38c986c71d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.763079  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 18:40:28.763126  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem (1082 bytes)
	I0919 18:40:28.763158  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:40:28.763190  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem (1675 bytes)
	I0919 18:40:28.763827  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:40:28.788710  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 18:40:28.813437  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:40:28.843050  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 18:40:28.867629  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:40:28.892447  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 18:40:28.919243  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:40:28.946630  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 18:40:28.971651  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:40:28.996622  293537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:40:29.016914  293537 ssh_runner.go:195] Run: openssl version
	I0919 18:40:29.022790  293537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:40:29.032837  293537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.036589  293537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.036657  293537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.043641  293537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
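The b5213941.0 link name is OpenSSL's subject-hash convention: trust stores under /etc/ssl/certs are indexed by the output of openssl x509 -hash. The two Run lines above amount to this sketch:

	# Sketch: install a CA into the hash-indexed system trust directory,
	# as the two commands above do for minikubeCA.pem.
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"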
	I0919 18:40:29.053700  293537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:40:29.057830  293537 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:40:29.057902  293537 kubeadm.go:392] StartCluster: {Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:40:29.058001  293537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 18:40:29.058061  293537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 18:40:29.100267  293537 cri.go:89] found id: ""
	I0919 18:40:29.100339  293537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:40:29.109720  293537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:40:29.118559  293537 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 18:40:29.118644  293537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:40:29.127755  293537 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:40:29.127779  293537 kubeadm.go:157] found existing configuration files:
	
	I0919 18:40:29.127861  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:40:29.136373  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:40:29.136470  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:40:29.145139  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:40:29.154300  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:40:29.154371  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:40:29.162969  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:40:29.172062  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:40:29.172201  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:40:29.180912  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:40:29.189802  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:40:29.189895  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 18:40:29.198252  293537 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 18:40:29.242636  293537 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:40:29.242730  293537 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:40:29.263410  293537 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 18:40:29.263486  293537 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0919 18:40:29.263526  293537 kubeadm.go:310] OS: Linux
	I0919 18:40:29.263578  293537 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 18:40:29.263638  293537 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0919 18:40:29.263690  293537 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 18:40:29.263742  293537 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 18:40:29.263795  293537 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 18:40:29.263853  293537 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 18:40:29.263910  293537 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 18:40:29.263966  293537 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 18:40:29.264017  293537 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0919 18:40:29.324338  293537 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:40:29.324483  293537 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:40:29.324600  293537 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:40:29.332452  293537 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:40:29.337268  293537 out.go:235]   - Generating certificates and keys ...
	I0919 18:40:29.337370  293537 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:40:29.337440  293537 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:40:29.819408  293537 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:40:30.596636  293537 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:40:31.221718  293537 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:40:31.614141  293537 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:40:31.765095  293537 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:40:31.765651  293537 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-971880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:40:32.058450  293537 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:40:32.058584  293537 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-971880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:40:32.624269  293537 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:40:32.992299  293537 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:40:33.509180  293537 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:40:33.509495  293537 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:40:33.874069  293537 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:40:34.248453  293537 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:40:34.476867  293537 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:40:34.768121  293537 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:40:34.973586  293537 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:40:34.974364  293537 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:40:34.977489  293537 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:40:34.980287  293537 out.go:235]   - Booting up control plane ...
	I0919 18:40:34.980416  293537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:40:34.980503  293537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:40:34.981705  293537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:40:34.992817  293537 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:40:35.003887  293537 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:40:35.004094  293537 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:40:35.102215  293537 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:40:35.102357  293537 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:40:37.103366  293537 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001367465s
	I0919 18:40:37.103468  293537 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:40:43.109466  293537 kubeadm.go:310] [api-check] The API server is healthy after 6.004105102s
	I0919 18:40:43.126717  293537 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:40:43.141419  293537 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:40:43.170749  293537 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:40:43.170964  293537 kubeadm.go:310] [mark-control-plane] Marking the node addons-971880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:40:43.182173  293537 kubeadm.go:310] [bootstrap-token] Using token: ebqgh7.vowgkmg5fzhkih57
	I0919 18:40:43.184491  293537 out.go:235]   - Configuring RBAC rules ...
	I0919 18:40:43.184636  293537 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:40:43.189100  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:40:43.198269  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:40:43.201802  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:40:43.205374  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:40:43.209929  293537 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:40:43.515171  293537 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:40:43.950419  293537 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:40:44.514706  293537 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:40:44.516367  293537 kubeadm.go:310] 
	I0919 18:40:44.516445  293537 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:40:44.516459  293537 kubeadm.go:310] 
	I0919 18:40:44.516539  293537 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:40:44.516549  293537 kubeadm.go:310] 
	I0919 18:40:44.516575  293537 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:40:44.516640  293537 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:40:44.516698  293537 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:40:44.516707  293537 kubeadm.go:310] 
	I0919 18:40:44.516764  293537 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:40:44.516773  293537 kubeadm.go:310] 
	I0919 18:40:44.516823  293537 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:40:44.516832  293537 kubeadm.go:310] 
	I0919 18:40:44.516885  293537 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:40:44.516972  293537 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:40:44.517047  293537 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:40:44.517059  293537 kubeadm.go:310] 
	I0919 18:40:44.517143  293537 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:40:44.517237  293537 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:40:44.517248  293537 kubeadm.go:310] 
	I0919 18:40:44.517338  293537 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ebqgh7.vowgkmg5fzhkih57 \
	I0919 18:40:44.517446  293537 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e7e5d662c08ea043dbeea6d8ddc73c887c0affcdbd05da0c73a8636c5020b2b0 \
	I0919 18:40:44.517472  293537 kubeadm.go:310] 	--control-plane 
	I0919 18:40:44.517480  293537 kubeadm.go:310] 
	I0919 18:40:44.517565  293537 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:40:44.517574  293537 kubeadm.go:310] 
	I0919 18:40:44.517657  293537 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ebqgh7.vowgkmg5fzhkih57 \
	I0919 18:40:44.517766  293537 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e7e5d662c08ea043dbeea6d8ddc73c887c0affcdbd05da0c73a8636c5020b2b0 
	I0919 18:40:44.521437  293537 kubeadm.go:310] W0919 18:40:29.239267    1169 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:44.521742  293537 kubeadm.go:310] W0919 18:40:29.240213    1169 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:44.521961  293537 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0919 18:40:44.522073  293537 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
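Among the warnings above, the two W0919 lines flag the kubeadm.k8s.io/v1beta3 API used in the generated kubeadm.yaml as deprecated; kubeadm v1.31 still accepts it. The migration the warning points at is a one-liner (output path here is hypothetical):

	# Sketch: rewrite the v1beta3 config to the current kubeadm API version,
	# per the deprecation warnings above.
	sudo kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /tmp/kubeadm-migrated.yaml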
	I0919 18:40:44.522100  293537 cni.go:84] Creating CNI manager for ""
	I0919 18:40:44.522107  293537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:40:44.524557  293537 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 18:40:44.526468  293537 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 18:40:44.530881  293537 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0919 18:40:44.530902  293537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 18:40:44.551529  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 18:40:44.830646  293537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:40:44.830784  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:44.830899  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-971880 minikube.k8s.io/updated_at=2024_09_19T18_40_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-971880 minikube.k8s.io/primary=true
	I0919 18:40:44.846703  293537 ops.go:34] apiserver oom_adj: -16
	I0919 18:40:44.988855  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:45.489713  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:45.988930  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:46.489778  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:46.989447  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:47.489853  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:47.988905  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:48.092838  293537 kubeadm.go:1113] duration metric: took 3.262100386s to wait for elevateKubeSystemPrivileges
	I0919 18:40:48.092865  293537 kubeadm.go:394] duration metric: took 19.034985288s to StartCluster
	I0919 18:40:48.092882  293537 settings.go:142] acquiring lock: {Name:mkc6a05e17453fceabfc207d0b4cc62ec1022659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:48.093002  293537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 18:40:48.093407  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/kubeconfig: {Name:mkfb909fdfd15278a636c3045acef421204406b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:48.093611  293537 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:40:48.093742  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:40:48.093981  293537 config.go:182] Loaded profile config "addons-971880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:48.094022  293537 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
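The toEnable map above is the post-start addon plan for this profile; each true entry is applied in the Setting addon lines that follow. The same toggles are exposed on the CLI; a sketch using this test's binary and profile:

	# Sketch: CLI equivalents of the addon toggles listed above.
	out/minikube-linux-arm64 -p addons-971880 addons list
	out/minikube-linux-arm64 -p addons-971880 addons enable metrics-server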
	I0919 18:40:48.094098  293537 addons.go:69] Setting yakd=true in profile "addons-971880"
	I0919 18:40:48.094113  293537 addons.go:234] Setting addon yakd=true in "addons-971880"
	I0919 18:40:48.094135  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.094641  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.095216  293537 addons.go:69] Setting cloud-spanner=true in profile "addons-971880"
	I0919 18:40:48.095236  293537 addons.go:234] Setting addon cloud-spanner=true in "addons-971880"
	I0919 18:40:48.095263  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.095702  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.095942  293537 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-971880"
	I0919 18:40:48.095971  293537 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-971880"
	I0919 18:40:48.096001  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.096486  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.099515  293537 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-971880"
	I0919 18:40:48.099580  293537 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-971880"
	I0919 18:40:48.099611  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.100085  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.103641  293537 addons.go:69] Setting default-storageclass=true in profile "addons-971880"
	I0919 18:40:48.103682  293537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-971880"
	I0919 18:40:48.104031  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.104497  293537 addons.go:69] Setting registry=true in profile "addons-971880"
	I0919 18:40:48.104553  293537 addons.go:234] Setting addon registry=true in "addons-971880"
	I0919 18:40:48.104649  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.110424  293537 addons.go:69] Setting gcp-auth=true in profile "addons-971880"
	I0919 18:40:48.110516  293537 mustload.go:65] Loading cluster: addons-971880
	I0919 18:40:48.110775  293537 config.go:182] Loaded profile config "addons-971880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:48.111137  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.113963  293537 addons.go:69] Setting storage-provisioner=true in profile "addons-971880"
	I0919 18:40:48.114039  293537 addons.go:234] Setting addon storage-provisioner=true in "addons-971880"
	I0919 18:40:48.114115  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.114635  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.124272  293537 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-971880"
	I0919 18:40:48.124372  293537 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-971880"
	I0919 18:40:48.125252  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.126048  293537 addons.go:69] Setting ingress=true in profile "addons-971880"
	I0919 18:40:48.126119  293537 addons.go:234] Setting addon ingress=true in "addons-971880"
	I0919 18:40:48.128419  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.133638  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.139430  293537 addons.go:69] Setting ingress-dns=true in profile "addons-971880"
	I0919 18:40:48.139516  293537 addons.go:234] Setting addon ingress-dns=true in "addons-971880"
	I0919 18:40:48.139599  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.148412  293537 addons.go:69] Setting volcano=true in profile "addons-971880"
	I0919 18:40:48.148444  293537 addons.go:234] Setting addon volcano=true in "addons-971880"
	I0919 18:40:48.148485  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.148978  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.154658  293537 addons.go:69] Setting inspektor-gadget=true in profile "addons-971880"
	I0919 18:40:48.155015  293537 addons.go:234] Setting addon inspektor-gadget=true in "addons-971880"
	I0919 18:40:48.155265  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.160373  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.168211  293537 addons.go:69] Setting volumesnapshots=true in profile "addons-971880"
	I0919 18:40:48.168262  293537 addons.go:234] Setting addon volumesnapshots=true in "addons-971880"
	I0919 18:40:48.168315  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.168800  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.169585  293537 addons.go:69] Setting metrics-server=true in profile "addons-971880"
	I0919 18:40:48.169648  293537 addons.go:234] Setting addon metrics-server=true in "addons-971880"
	I0919 18:40:48.169697  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.170227  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.188308  293537 out.go:177] * Verifying Kubernetes components...
	I0919 18:40:48.192536  293537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:48.193478  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.200152  293537 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:40:48.203808  293537 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:48.203875  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:40:48.203989  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.225477  293537 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:40:48.227964  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:40:48.228044  293537 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:40:48.228159  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.233391  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.247643  293537 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:40:48.247770  293537 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:40:48.249933  293537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:48.249954  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:40:48.250022  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.250280  293537 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:48.250292  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:40:48.250331  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
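Worth decoding once, since it recurs throughout this log: the `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` calls ask Docker which ephemeral host port it bound to the node container's SSH port, and every `sshutil` client below dials 127.0.0.1 on the answer (33133 in this run). A minimal standalone sketch of the same lookup in Go, assuming only a container with a published 22/tcp port:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort asks Docker which host port is bound to the container's
// 22/tcp port, using the same Go-template query seen in the log above.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("addons-971880") // container name from this run
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", port) // prints 33133 in the run logged here
}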
	I0919 18:40:48.268018  293537 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-971880"
	I0919 18:40:48.268064  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.268669  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.301994  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.321185  293537 addons.go:234] Setting addon default-storageclass=true in "addons-971880"
	I0919 18:40:48.321281  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.321773  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.335436  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:48.370533  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:40:48.379795  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:48.386450  293537 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:40:48.386524  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:40:48.386624  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	W0919 18:40:48.424168  293537 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0919 18:40:48.424644  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:40:48.425429  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:40:48.436157  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.442440  293537 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:40:48.443423  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.445300  293537 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:40:48.445323  293537 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:40:48.445395  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.456187  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:40:48.457831  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:40:48.460477  293537 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:40:48.460613  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.461087  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:40:48.468240  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.470505  293537 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:40:48.470671  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:40:48.470691  293537 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:40:48.470755  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.471236  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:40:48.471287  293537 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:40:48.471381  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.476564  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:40:48.478508  293537 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 18:40:48.478550  293537 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:40:48.485990  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:40:48.486221  293537 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:40:48.486238  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:40:48.486306  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.492362  293537 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:40:48.492383  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:40:48.492450  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.507563  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:40:48.510093  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:40:48.520263  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:40:48.522642  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:40:48.522662  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:40:48.522728  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.523732  293537 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:40:48.528259  293537 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:40:48.537887  293537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:48.537911  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:40:48.537972  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.563951  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.620227  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.621676  293537 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:48.621692  293537 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:40:48.621753  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.656254  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.660010  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.662112  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.680282  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.691828  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.701064  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.730089  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
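All of the `new ssh client` lines above reuse that one resolved port, and each `scp memory --> ...` entry streams a manifest embedded in the minikube binary to the node over SSH rather than reading a file from the host. minikube's ssh_runner implements the scp protocol itself; the sketch below swaps in a `sudo tee` pipe, which is simpler but explicitly not minikube's mechanism. Address, user, and key path are copied from the log; the demo manifest is invented:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyToNode streams in-memory bytes to a path on the node over SSH by
// piping them into `sudo tee` (a stand-in for ssh_runner's internal scp).
func copyToNode(addr, keyPath string, data []byte, dest string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	stdin, err := session.StdinPipe()
	if err != nil {
		return err
	}
	if err := session.Start(fmt.Sprintf("sudo tee %s >/dev/null", dest)); err != nil {
		return err
	}
	if _, err := stdin.Write(data); err != nil {
		return err
	}
	stdin.Close()
	return session.Wait()
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n") // illustrative
	err := copyToNode("127.0.0.1:33133",
		"/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa",
		manifest, "/etc/kubernetes/addons/demo.yaml")
	if err != nil {
		panic(err)
	}
}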
	I0919 18:40:48.847079  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:40:48.847155  293537 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:40:48.899693  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:48.952737  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:48.983419  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:49.059022  293537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:40:49.066672  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:49.071175  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:40:49.071244  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:40:49.089048  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:40:49.089124  293537 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:40:49.105564  293537 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:40:49.105644  293537 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:40:49.141153  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:49.153728  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:40:49.171677  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:40:49.171749  293537 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:40:49.196160  293537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:40:49.196258  293537 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:40:49.201404  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:40:49.300209  293537 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:40:49.300237  293537 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:40:49.307634  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:49.307707  293537 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:40:49.314732  293537 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:40:49.314805  293537 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:40:49.316388  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:40:49.316451  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:40:49.322479  293537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:40:49.322560  293537 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:40:49.324957  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:40:49.325025  293537 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:40:49.443569  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:49.465909  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:40:49.465986  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:40:49.486828  293537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:40:49.486903  293537 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:40:49.490513  293537 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:40:49.490583  293537 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:40:49.497348  293537 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:49.497417  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:40:49.499708  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:49.499771  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:40:49.604687  293537 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:40:49.604762  293537 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:40:49.622808  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:40:49.622885  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:40:49.638544  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:40:49.638621  293537 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:40:49.675019  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:49.677046  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:49.714011  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:40:49.714092  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:40:49.716817  293537 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:40:49.716895  293537 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:40:49.762646  293537 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:49.762723  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:40:49.803578  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:40:49.803657  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:40:49.810913  293537 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:40:49.810986  293537 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:40:49.866155  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:49.879116  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:40:49.879177  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:40:49.879534  293537 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:49.879572  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:40:49.968313  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:40:49.968393  293537 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:40:49.996879  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:50.013103  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:40:50.013191  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:40:50.050463  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:40:50.050536  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:40:50.104006  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:50.104091  293537 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:40:50.207761  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:52.084903  293537 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.659437294s)
	I0919 18:40:52.084932  293537 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0919 18:40:52.804936  293537 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-971880" context rescaled to 1 replicas
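Two cluster tweaks just completed: the CoreDNS Corefile gained a hosts{} stanza mapping host.minikube.internal to the gateway IP (the kubectl-through-sed pipeline that finished above, 3.66s), and the coredns deployment was rescaled to one replica. The same Corefile edit can be made with client-go instead of shelling out; a rough sketch, assuming a reachable kubeconfig, with the IP and hostname taken from this run:

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path from the log; any reachable kubeconfig works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Splice a hosts{} stanza in ahead of the forward plugin, mirroring
	// the sed edit in the log (gateway IP and hostname from this run).
	hosts := "hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n        forward ."
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "forward .", hosts, 1)
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}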
	I0919 18:40:52.874938  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.975162388s)
	I0919 18:40:53.461651  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.508878588s)
	I0919 18:40:53.462093  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.478595684s)
	I0919 18:40:53.462151  293537 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.403060239s)
	I0919 18:40:53.463230  293537 node_ready.go:35] waiting up to 6m0s for node "addons-971880" to be "Ready" ...
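node_ready.go now starts a 6-minute poll of the node object; the `"Ready":"False"` entries scattered below are that loop reporting. A condensed client-go equivalent of the check it performs (helper name is mine):

package addons

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its Ready condition is True,
// using the same 6m budget stated in the log line above.
func waitNodeReady(cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // no Ready condition reported yet
		})
}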
	I0919 18:40:53.463954  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.39718287s)
	I0919 18:40:53.468432  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.327207142s)
	W0919 18:40:53.594176  293537 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
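That warning is an optimistic-concurrency failure, not a broken addon: two writers raced on the `local-path` StorageClass, the losing Update carried a stale resourceVersion, and the API server refused it. The conventional cure is client-go's RetryOnConflict, which re-reads the object and re-applies the mutation; a sketch (the annotation key is the real default-class marker, the helper name is mine):

package addons

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markDefault sets the default-class annotation on a StorageClass,
// re-reading and re-applying whenever the write hits a conflict.
func markDefault(cs *kubernetes.Clientset, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ctx := context.Background()
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err // a 409 Conflict here makes RetryOnConflict loop
	})
}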
	I0919 18:40:54.547718  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.393904123s)
	I0919 18:40:54.547758  293537 addons.go:475] Verifying addon ingress=true in "addons-971880"
	I0919 18:40:54.548061  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.346577228s)
	I0919 18:40:54.548169  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.104526551s)
	I0919 18:40:54.548182  293537 addons.go:475] Verifying addon metrics-server=true in "addons-971880"
	I0919 18:40:54.548246  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.873146657s)
	I0919 18:40:54.548327  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.871207532s)
	I0919 18:40:54.548340  293537 addons.go:475] Verifying addon registry=true in "addons-971880"
	I0919 18:40:54.548534  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.682309401s)
	W0919 18:40:54.548991  293537 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:40:54.549023  293537 retry.go:31] will retry after 205.142793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
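This retry is an ordering problem rather than a real failure: the VolumeSnapshotClass object sits in the same apply batch as the CRD that defines it, and the API server had not yet established the CRD when the custom resource arrived, hence "no matches for kind". minikube simply waits 205ms and retries; an alternative is to block on the CRD's Established condition before applying any custom resources, sketched below with the apiextensions client (helper name is mine):

package addons

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiext "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitEstablished blocks until the named CRD reports Established=True,
// at which point custom resources of that kind will REST-map.
func waitEstablished(cs *apiext.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established {
					return cond.Status == apiextv1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

// e.g. waitEstablished(cs, "volumesnapshotclasses.snapshot.storage.k8s.io")
// before applying csi-hostpath-snapshotclass.yaml.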
	I0919 18:40:54.548602  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.5516414s)
	I0919 18:40:54.551437  293537 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-971880 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:40:54.551461  293537 out.go:177] * Verifying ingress addon...
	I0919 18:40:54.551445  293537 out.go:177] * Verifying registry addon...
	I0919 18:40:54.557513  293537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 18:40:54.557513  293537 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:40:54.596066  293537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:54.596198  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.601815  293537 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:40:54.601882  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.754480  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:54.864433  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.656576474s)
	I0919 18:40:54.864471  293537 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-971880"
	I0919 18:40:54.868277  293537 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:40:54.871943  293537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:40:54.892561  293537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:54.892589  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
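The kapi.go lines here and below are a second wait loop: list the pods matching an addon's label selector, then poll until every match leaves Pending for Running. The long run of `current state: Pending` entries that follows is that loop ticking for registry, ingress-nginx, csi-hostpath-driver and, shortly, gcp-auth. A condensed equivalent (helper name is mine):

package addons

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodsRunning lists pods matching selector in ns and polls until at
// least one exists and all of them report phase Running.
func waitPodsRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // still Pending, keep polling
				}
			}
			return true, nil
		})
}

// e.g. waitPodsRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)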
	I0919 18:40:55.065376  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.066290  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.378204  293537 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:55.378236  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.467473  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:40:55.562562  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.564541  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.878298  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.069085  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.070574  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.378665  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.563025  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.564886  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.877667  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.063752  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.064417  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.376416  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.467718  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:40:57.564440  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.564945  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.876755  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.064027  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.065697  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.092344  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.337768039s)
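Note that the `--force` re-apply of the snapshot manifests succeeds here (3.34s) most plausibly because the CRDs created by the first attempt had become established in the interim, consistent with the ordering analysis above; the flag itself only changes how kubectl resolves object conflicts and does not make CRDs register any faster.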
	I0919 18:40:58.378257  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.403319  293537 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:40:58.403425  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:58.443191  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:58.567155  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.567685  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.572168  293537 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:40:58.592692  293537 addons.go:234] Setting addon gcp-auth=true in "addons-971880"
	I0919 18:40:58.592803  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:58.593324  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:58.612730  293537 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:40:58.612786  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:58.630178  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:58.730014  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:58.732139  293537 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:40:58.734124  293537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:40:58.734146  293537 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:40:58.768295  293537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:40:58.768320  293537 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:40:58.797975  293537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:58.797994  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:40:58.817821  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:58.876322  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.072911  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.074414  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.382599  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.438129  293537 addons.go:475] Verifying addon gcp-auth=true in "addons-971880"
	I0919 18:40:59.440631  293537 out.go:177] * Verifying gcp-auth addon...
	I0919 18:40:59.442860  293537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:40:59.464789  293537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:40:59.464814  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.480890  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:40:59.561671  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.562964  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.875651  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.946736  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.070465  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.077137  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.384708  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.448341  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.582774  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.583004  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.877719  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.948225  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.065344  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.067794  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.375354  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.448227  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.561780  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.562960  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.875771  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.945881  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.966831  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:02.062563  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.062799  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.376547  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.447352  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.561779  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.562611  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.875580  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.946256  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.061831  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.062387  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.375870  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.446437  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.560976  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.561891  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.875202  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.947962  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.967037  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:04.061480  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.062379  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.375941  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.446238  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.562468  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.562877  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.875285  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.946660  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.062465  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.062886  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.376001  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.446912  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.561838  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.562615  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.875421  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.946657  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.061543  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.062589  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.375196  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.446960  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.466347  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:06.562063  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.562764  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.875596  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.946898  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.061921  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.063181  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.375810  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.446026  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.561428  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.562546  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.875172  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.946505  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.061499  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:08.062816  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.376019  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.446272  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.467168  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:08.562612  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.562946  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:08.876335  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.946735  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.061910  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:09.062619  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.375133  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.447206  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.561421  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:09.562389  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.876131  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.946813  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.062507  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:10.064607  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.375353  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.446904  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.562895  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.563990  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:10.875793  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.946554  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.967028  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:11.061932  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:11.063150  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.375830  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.446348  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.561653  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:11.563206  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.875920  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.946654  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.061917  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:12.062460  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.375786  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.446012  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.562540  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.562908  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:12.877867  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.946299  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.060991  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:13.062119  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.375793  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.445883  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.466546  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:13.561960  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:13.562612  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.875666  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.947427  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.061694  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:14.062464  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.376511  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.446294  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.562791  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:14.563547  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.875418  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.946605  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.062005  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:15.063964  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:15.377135  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.446681  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.562379  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:15.562700  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:15.877033  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.946231  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.966501  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:16.062211  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:16.063155  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:16.376819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.446617  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.563906  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:16.565097  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:16.876051  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.946253  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.066866  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:17.067521  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:17.376509  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.446316  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.561816  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:17.562038  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:17.875656  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.946877  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.966970  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:18.061223  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:18.062175  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:18.376287  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.446551  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:18.561931  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:18.562843  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:18.875329  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.947103  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.061452  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:19.062306  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:19.375857  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.446215  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.561838  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:19.562763  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:19.875704  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.946970  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.967322  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:20.061991  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:20.063004  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:20.375819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.446018  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:20.561673  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:20.562416  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:20.875770  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.946314  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:21.061298  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:21.062118  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:21.376133  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.446523  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:21.561547  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:21.562572  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:21.875087  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.946665  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:22.061448  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:22.062440  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:22.375879  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.446019  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:22.467332  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:22.561928  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:22.562802  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:22.876174  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.947177  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:23.061819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:23.062482  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:23.375560  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:23.446172  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:23.560905  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:23.562464  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:23.875348  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:23.947319  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.060883  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:24.062369  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:24.376201  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:24.446773  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.562406  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:24.563400  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:24.876177  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:24.947005  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.969870  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:25.060859  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:25.061661  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:25.375277  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:25.447052  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:25.561034  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:25.562067  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:25.875854  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:25.946102  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:26.061201  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:26.062358  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:26.376390  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:26.446809  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:26.561912  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:26.562731  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:26.875687  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:26.946990  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:27.060915  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:27.061831  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:27.375963  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:27.446755  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:27.466656  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:27.561455  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:27.562776  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:27.875854  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:27.946628  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:28.061527  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:28.062880  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:28.376492  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:28.446888  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:28.561880  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:28.563128  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:28.876058  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:28.947461  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.061576  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:29.062985  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:29.375644  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:29.446816  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.561107  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:29.561942  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:29.875796  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:29.945754  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.966769  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:30.062213  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:30.062963  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:30.376763  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:30.446860  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:30.561690  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:30.562513  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:30.875970  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:30.945998  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:31.062104  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:31.063092  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:31.375520  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:31.446646  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:31.561126  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:31.562097  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:31.875721  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:31.946673  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.061200  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:32.061808  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:32.376029  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:32.446717  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.466655  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:32.561644  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:32.562929  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:32.875835  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:32.966393  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.984056  293537 node_ready.go:49] node "addons-971880" has status "Ready":"True"
	I0919 18:41:32.984085  293537 node_ready.go:38] duration metric: took 39.520822677s for node "addons-971880" to be "Ready" ...
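The node_ready lines above are a single wait loop: minikube re-reads the node object every couple of seconds and logs "Ready":"False" until the kubelet flips the node's Ready condition, which here took 39.5s. Below is a minimal sketch of that kind of poll with client-go; it is an illustration only, not minikube's actual node_ready.go, and the kubeconfig path, poll interval, and timeout are placeholder assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node until its Ready condition is True,
	// mirroring the node_ready.go:53 / node_ready.go:49 lines above.
	// Interval and timeout are illustrative, not minikube's values.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "addons-971880"); err != nil {
			panic(err)
		}
		fmt.Println(`node "addons-971880" has status "Ready":"True"`)
	}

Returning false with a nil error on a failed Get keeps the poll alive through transient API-server hiccups, which matters while the control plane itself is still settling.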
	I0919 18:41:32.984096  293537 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:41:33.035725  293537 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lzshk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:33.085442  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:33.086727  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:33.401079  293537 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:41:33.401109  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:33.449540  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:33.562204  293537 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:41:33.562272  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:33.562938  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
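Each kapi.go:96 line is one round of a similar poll over a label selector: list the matching pods, log how many were found (the kapi.go:86 lines), and keep waiting while any matched pod is still in the Pending phase. A sketch of one such round follows, an assumption rather than minikube's actual kapi.go; the fake clientset and pod in main are illustrative stand-ins for the registry pods above.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/kubernetes/fake"
	)

	// allScheduled runs one poll round: list pods for the selector and
	// report whether every matched pod has left the Pending phase.
	func allScheduled(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodPending {
				return false, nil // at least one pod still Pending: poll again
			}
		}
		return len(pods.Items) > 0, nil
	}

	func main() {
		// A fake clientset seeded with one Pending pod, so the round
		// reports false, matching the "Pending" lines above.
		cs := fake.NewSimpleClientset(&corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "registry-66c9cd494c-zjfvp",
				Namespace: "kube-system",
				Labels:    map[string]string{"kubernetes.io/minikube-addons": "registry"},
			},
			Status: corev1.PodStatus{Phase: corev1.PodPending},
		})
		ok, _ := allScheduled(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry")
		fmt.Println(ok)
	}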
	I0919 18:41:33.879142  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:33.979993  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.083770  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:34.085152  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:34.397059  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:34.486406  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.549855  293537 pod_ready.go:93] pod "coredns-7c65d6cfc9-lzshk" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.549882  293537 pod_ready.go:82] duration metric: took 1.514119286s for pod "coredns-7c65d6cfc9-lzshk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.549904  293537 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.557730  293537 pod_ready.go:93] pod "etcd-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.557755  293537 pod_ready.go:82] duration metric: took 7.843669ms for pod "etcd-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.558059  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.564913  293537 pod_ready.go:93] pod "kube-apiserver-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.564937  293537 pod_ready.go:82] duration metric: took 6.858144ms for pod "kube-apiserver-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.564948  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.570244  293537 pod_ready.go:93] pod "kube-controller-manager-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.570267  293537 pod_ready.go:82] duration metric: took 5.311429ms for pod "kube-controller-manager-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.570281  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pf8wk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.587641  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:34.589693  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:34.607709  293537 pod_ready.go:93] pod "kube-proxy-pf8wk" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.607736  293537 pod_ready.go:82] duration metric: took 37.446262ms for pod "kube-proxy-pf8wk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.607748  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.883869  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:34.946929  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.968272  293537 pod_ready.go:93] pod "kube-scheduler-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.968298  293537 pod_ready.go:82] duration metric: took 360.543214ms for pod "kube-scheduler-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.968310  293537 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace to be "Ready" ...
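This is the wait that does not resolve in this section: every pod_ready.go:103 line below re-checks metrics-server-84c5f94fbc-jrbzm and finds it not yet Ready. "Ready" here means the pod's PodReady status condition is True; a minimal sketch of that predicate follows, again an assumption for illustration rather than minikube's actual pod_ready.go.

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady scans the pod's status conditions for PodReady and
	// requires it to be True — the predicate behind the
	// pod_ready.go:93 / pod_ready.go:103 lines above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		}}}
		fmt.Println(isPodReady(pod)) // false, matching "Ready":"False" below
	}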
	I0919 18:41:35.066332  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:35.067047  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:35.377071  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:35.446694  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:35.563116  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:35.564514  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:35.878270  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:35.947116  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.062668  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:36.064093  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:36.378169  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:36.446076  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.562355  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:36.563611  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:36.877416  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:36.946831  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.976090  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:37.070127  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:37.070708  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:37.378036  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:37.449197  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:37.572857  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:37.574137  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:37.878958  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:37.947687  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:38.066635  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:38.068196  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:38.379960  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:38.447036  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:38.566180  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:38.567164  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:38.878059  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:38.948415  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:39.065678  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:39.068345  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:39.382645  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:39.446403  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:39.474388  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:39.561643  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:39.562705  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:39.876574  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:39.948622  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:40.064799  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:40.070969  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:40.378517  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:40.447109  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:40.563248  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:40.564262  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:40.878488  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:40.947935  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:41.066945  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:41.068000  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:41.377261  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:41.447055  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:41.475670  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:41.564547  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:41.565894  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:41.877348  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:41.946812  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:42.063870  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:42.065853  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:42.378089  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:42.447279  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:42.562819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:42.564637  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:42.877041  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:42.947027  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:43.063723  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:43.066318  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:43.378706  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:43.447494  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:43.475801  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:43.562902  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:43.565584  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:43.878005  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:43.958649  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:44.063440  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:44.064670  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:44.378042  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:44.446376  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:44.566188  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:44.567897  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:44.885274  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:44.948492  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:45.083599  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:45.091434  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:45.382104  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:45.479271  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:45.481202  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:45.565683  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:45.566749  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:45.877414  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:45.947825  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:46.065683  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:46.067689  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:46.377216  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:46.447142  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:46.564574  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:46.566078  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:46.879355  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:46.976594  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.062108  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:47.063081  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:47.377187  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:47.446380  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.563241  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:47.563925  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:47.877203  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:47.946716  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.974788  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:48.062852  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:48.063938  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:48.379041  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:48.446882  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:48.564580  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:48.567894  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:48.877326  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:48.947262  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.064573  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:49.065888  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:49.377713  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:49.446539  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.563561  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:49.564620  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:49.876718  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:49.946923  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.976834  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:50.062984  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:50.063275  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:50.378977  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:50.477858  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:50.562860  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:50.563269  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:50.877622  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:50.946366  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:51.061768  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:51.062839  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:51.379398  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:51.478496  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:51.564227  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:51.564662  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:51.877074  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:51.946553  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:52.062951  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:52.063955  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:52.377899  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:52.446850  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:52.479114  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:52.563898  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:52.565470  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:52.878227  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:52.947109  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:53.066828  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:53.066812  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:53.389853  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:53.450162  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:53.568765  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:53.569814  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:53.877173  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:53.947676  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:54.062677  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:54.064006  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:54.377637  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:54.447160  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:54.563690  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:54.565109  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:54.878630  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:54.949386  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:54.976344  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:55.065055  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:55.065708  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:55.377239  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:55.447929  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:55.565896  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:55.566378  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:55.877242  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:55.946425  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:56.062875  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:56.063179  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:56.379895  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:56.447076  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:56.562740  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:56.563473  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:56.877550  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:56.947753  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:57.067682  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:57.070082  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:57.377976  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:57.447187  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:57.475294  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:57.569051  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:57.570062  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:57.877048  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:57.983847  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:58.077525  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:58.079554  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:58.380087  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:58.446658  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:58.563211  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:58.564086  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:58.877276  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:58.946236  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:59.062712  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:59.062998  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:59.377963  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:59.446451  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:59.563498  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:59.564989  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:59.876502  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:59.947957  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:59.977211  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:00.121226  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:00.133316  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:00.391137  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:00.472275  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:00.566019  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:00.569802  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:00.877649  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:00.947625  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:01.066312  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:01.068223  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:01.377479  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:01.447220  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:01.563581  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:01.566404  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:01.877374  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:01.950613  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:01.979124  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:02.084459  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:02.085060  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:02.378285  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:02.447656  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:02.564352  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:02.566754  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:02.877376  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:02.979247  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:03.078289  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:03.078836  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:03.377708  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:03.447086  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:03.561944  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:03.563509  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:03.877703  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:03.950350  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:04.062096  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:04.064223  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:04.377278  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:04.446833  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:04.475318  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:04.562383  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:04.563641  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:04.884659  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:04.989733  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:05.061214  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:05.063030  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:05.377498  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:05.447815  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:05.565160  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:05.567761  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:05.876913  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:05.950444  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:06.063189  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:06.064164  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:06.379772  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:06.446607  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:06.478893  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:06.566184  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:06.567288  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:06.879373  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:06.948136  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:07.067070  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:07.072076  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:07.377008  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:07.446697  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:07.569679  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:07.571369  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:07.880635  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:07.947236  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:08.071737  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:08.077335  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:08.378546  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:08.447335  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:08.564632  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:08.565221  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:08.877848  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:08.946934  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:08.974974  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:09.064653  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:09.065797  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:09.377975  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:09.476947  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:09.563133  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:09.564689  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:09.876749  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:09.946248  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:10.062142  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:42:10.063644  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:10.377813  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:10.446860  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:10.562053  293537 kapi.go:107] duration metric: took 1m16.004535153s to wait for kubernetes.io/minikube-addons=registry ...
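The kapi.go:96/:107 lines above come from a poll loop: list pods matching a label selector, report the current phase, and retry until every match is Running. The following is a minimal sketch of that pattern using client-go; `waitForPodsByLabel` is a hypothetical helper for illustration, not minikube's actual kapi implementation.

```go
package kapisketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsByLabel polls pods matching selector in ns until all are
// Running, logging the current phase on each miss, like the lines above.
func waitForPodsByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	start := time.Now()
	for {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if allRunning {
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```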
	I0919 18:42:10.562215  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:10.876987  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:10.946342  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:11.061971  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:11.377751  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:11.447271  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:11.480810  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:11.563706  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:11.877410  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:11.947282  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:12.063287  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:12.378511  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:12.446827  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:12.563797  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:12.877171  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:12.946514  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:13.063988  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:13.379347  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:13.450900  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:13.481526  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:13.573718  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:13.878016  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:13.954643  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:14.065257  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:14.379195  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:14.447732  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:14.566057  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:14.878940  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:14.947019  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:15.066043  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:15.377698  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:15.448279  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:15.564997  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:15.876343  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:15.946958  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:15.976796  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:16.063898  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:16.377681  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:16.477282  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:16.562688  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:16.878927  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:16.946784  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:17.063387  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:17.377333  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:17.446740  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:17.563508  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:17.883389  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:17.948827  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:17.985932  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:18.064777  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:18.395701  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:18.488133  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:18.563688  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:18.880224  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:18.947351  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:19.067004  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:19.378182  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:19.446917  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:19.562840  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:19.877728  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:19.948075  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:20.064123  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:20.377853  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:20.447732  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:20.480331  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:20.565156  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:20.878939  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:20.978062  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:21.062166  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:21.378624  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:21.447261  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:21.563076  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:21.876989  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:21.946848  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:22.068830  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:22.377684  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:22.484429  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:22.485223  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:22.578263  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:22.878134  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:22.947298  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:23.065838  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:23.376555  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:23.448395  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:23.565684  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:23.877495  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:23.951222  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:24.062074  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:24.377460  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:24.485068  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:24.488152  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:24.584713  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:24.876694  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:24.946971  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:25.062114  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:25.389522  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:25.447036  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:25.562186  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:25.876882  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:25.946314  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:26.062299  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:26.378575  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:26.463928  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:26.495554  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:26.568642  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:26.878105  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:26.946857  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:27.063120  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:27.378102  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:27.447089  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:27.562236  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:27.876843  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:27.945837  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:28.063213  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:28.378654  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:28.447524  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:28.562451  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:28.878401  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:28.947457  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:28.977975  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:29.063832  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:29.377289  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:29.446975  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:29.562465  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:29.877929  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:29.946408  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:30.063568  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:30.379021  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:30.449320  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:30.565273  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:30.880376  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:30.986980  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:31.063033  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:31.377706  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:31.448205  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:31.478392  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:31.565141  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:42:31.877461  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:31.946903  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:32.062732  293537 kapi.go:107] duration metric: took 1m37.505219255s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 18:42:32.376733  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:32.448562  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:32.879931  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:32.978367  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:33.377007  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:33.452566  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:33.880325  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:33.960368  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:33.974903  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:34.376634  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:34.447204  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:34.881901  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:34.946224  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:35.377760  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:35.446114  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:42:35.878736  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:35.947223  293537 kapi.go:107] duration metric: took 1m36.504361675s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:42:35.949426  293537 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-971880 cluster.
	I0919 18:42:35.951872  293537 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:42:35.953970  293537 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
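The message above says a pod opts out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of creating such a pod with client-go follows; the pod name, image, and label value are illustrative (per the message, only the label key is significant).

```go
package gcpauthsketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPodSkippingGCPAuth creates a pod labeled gcp-auth-skip-secret so
// the gcp-auth addon leaves it without mounted credentials.
func createPodSkippingGCPAuth(ctx context.Context, c kubernetes.Interface) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds", // illustrative name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	_, err := c.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```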
	I0919 18:42:35.975189  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:36.377260  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:36.877815  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:37.377370  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:37.888310  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:37.982936  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:38.386669  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:38.877270  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:39.377530  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:39.877499  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:40.378293  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:40.475670  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:40.877392  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:41.376692  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:41.878130  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:42.378347  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:42.878515  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:42.977798  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:43.377066  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:42:43.878894  293537 kapi.go:107] duration metric: took 1m49.006949754s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:42:43.881074  293537 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0919 18:42:43.884077  293537 addons.go:510] duration metric: took 1m55.790054032s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0919 18:42:43.983903  293537 pod_ready.go:93] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"True"
	I0919 18:42:43.983991  293537 pod_ready.go:82] duration metric: took 1m9.015672466s for pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace to be "Ready" ...
	I0919 18:42:43.984031  293537 pod_ready.go:39] duration metric: took 1m10.999895399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
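The pod_ready.go lines above report `"Ready":"False"` until metrics-server finally flips to `"Ready":"True"`. That status comes from the pod's Ready condition; a minimal sketch of the check (standard Kubernetes API semantics, not minikube's exact code) follows.

```go
package readysketch

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's Ready condition is True — the value
// behind the "Ready":"False"/"True" log lines above.
func isPodReady(p *corev1.Pod) bool {
	for _, cond := range p.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```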
	I0919 18:42:43.984651  293537 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:42:43.984805  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:42:43.984924  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:42:44.038733  293537 cri.go:89] found id: "a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:44.038756  293537 cri.go:89] found id: ""
	I0919 18:42:44.038765  293537 logs.go:276] 1 containers: [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99]
	I0919 18:42:44.038822  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.043249  293537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:42:44.043334  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:42:44.088606  293537 cri.go:89] found id: "1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:44.088631  293537 cri.go:89] found id: ""
	I0919 18:42:44.088639  293537 logs.go:276] 1 containers: [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b]
	I0919 18:42:44.088700  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.092415  293537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:42:44.092495  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:42:44.135646  293537 cri.go:89] found id: "c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:44.135670  293537 cri.go:89] found id: ""
	I0919 18:42:44.135678  293537 logs.go:276] 1 containers: [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4]
	I0919 18:42:44.135735  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.139218  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:42:44.139291  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:42:44.179758  293537 cri.go:89] found id: "d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:44.179782  293537 cri.go:89] found id: ""
	I0919 18:42:44.179790  293537 logs.go:276] 1 containers: [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569]
	I0919 18:42:44.179856  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.184338  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:42:44.184432  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:42:44.223834  293537 cri.go:89] found id: "dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:44.223868  293537 cri.go:89] found id: ""
	I0919 18:42:44.223877  293537 logs.go:276] 1 containers: [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a]
	I0919 18:42:44.223947  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.227670  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:42:44.227745  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:42:44.264952  293537 cri.go:89] found id: "4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:44.264974  293537 cri.go:89] found id: ""
	I0919 18:42:44.264982  293537 logs.go:276] 1 containers: [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341]
	I0919 18:42:44.265042  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.268932  293537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:42:44.269034  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:42:44.307612  293537 cri.go:89] found id: "dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:44.307635  293537 cri.go:89] found id: ""
	I0919 18:42:44.307644  293537 logs.go:276] 1 containers: [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82]
	I0919 18:42:44.307706  293537 ssh_runner.go:195] Run: which crictl
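Each found-id cycle above runs `sudo crictl ps -a --quiet --name=<component>`; with `--quiet`, crictl prints one container ID per line, which explains the single-ID "found id" results. A sketch of that wrapper follows, as an illustration rather than minikube's cri.go.

```go
package crisketch

import (
	"os/exec"
	"strings"
)

// listContainerIDs runs crictl with --quiet, which emits one container ID
// per line, and collects the non-empty lines.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}
```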
	I0919 18:42:44.311797  293537 logs.go:123] Gathering logs for container status ...
	I0919 18:42:44.311840  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:42:44.363577  293537 logs.go:123] Gathering logs for kubelet ...
	I0919 18:42:44.363608  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:42:44.393941  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: W0919 18:41:32.997203    1465 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.394218  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: E0919 18:41:32.997256    1465 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.394411  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016421    1465 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.394643  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016471    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.394822  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016528    1465 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.395044  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016541    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.395209  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016587    1465 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.395414  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016607    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.395601  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016654    1465 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.395828  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.396004  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.396232  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.396400  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.396607  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
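The "Found kubelet problem" warnings above are produced by scanning the recent kubelet journal (`journalctl -u kubelet -n 400`, per the Run line) for error-looking entries. A rough sketch of that pattern follows; the substring filters are assumptions for illustration, not logs.go's actual rules.

```go
package logscan

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// scanKubeletProblems pulls recent kubelet journal entries and flags lines
// that look like errors, echoing the "Found kubelet problem" output above.
func scanKubeletProblems() error {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		return err
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "Unhandled Error") || strings.Contains(line, "failed to list") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
	return sc.Err()
}
```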
	I0919 18:42:44.454727  293537 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:42:44.454772  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:42:44.643066  293537 logs.go:123] Gathering logs for etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] ...
	I0919 18:42:44.643099  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:44.698468  293537 logs.go:123] Gathering logs for kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] ...
	I0919 18:42:44.698502  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:44.743288  293537 logs.go:123] Gathering logs for kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] ...
	I0919 18:42:44.743317  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:44.813056  293537 logs.go:123] Gathering logs for kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] ...
	I0919 18:42:44.813098  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:44.861228  293537 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:42:44.861256  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:42:44.957892  293537 logs.go:123] Gathering logs for dmesg ...
	I0919 18:42:44.957933  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:42:44.974633  293537 logs.go:123] Gathering logs for kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] ...
	I0919 18:42:44.974662  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:45.074514  293537 logs.go:123] Gathering logs for coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] ...
	I0919 18:42:45.075778  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:45.206965  293537 logs.go:123] Gathering logs for kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] ...
	I0919 18:42:45.207154  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:45.281778  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:45.281818  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:42:45.281948  293537 out.go:270] X Problems detected in kubelet:
	W0919 18:42:45.281964  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:45.281982  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:45.282001  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:45.282028  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:45.282048  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:42:45.282076  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:45.282086  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
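The out.go:392 line notes that empty TERM/COLORTERM values "probably" mean no color support, so output falls back to plain text on stderr. A minimal sketch of that kind of environment check follows; it is an assumption-level illustration, not minikube's out package.

```go
package colorsketch

import "os"

// supportsColor guesses color capability from TERM/COLORTERM, in the spirit
// of the "which probably does not support color" message above: an empty or
// "dumb" terminal likely cannot render color.
func supportsColor() bool {
	if os.Getenv("COLORTERM") != "" {
		return true
	}
	term := os.Getenv("TERM")
	return term != "" && term != "dumb"
}
```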
	I0919 18:42:55.283244  293537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:42:55.296761  293537 api_server.go:72] duration metric: took 2m7.20311709s to wait for apiserver process to appear ...
	I0919 18:42:55.296785  293537 api_server.go:88] waiting for apiserver healthz status ...
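"Waiting for apiserver healthz status" means polling the apiserver's /healthz endpoint until it answers 200 OK. The sketch below shows the shape of such a poll over plain HTTP for simplicity; in reality the apiserver serves HTTPS and minikube derives the address and client credentials from the cluster config, so the URL here is an assumption.

```go
package healthsketch

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url+"/healthz" until it returns 200 OK or the
// timeout elapses. Illustrative only: the real check authenticates over TLS.
func waitForHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ready within %s", timeout)
}
```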
	I0919 18:42:55.297414  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:42:55.297493  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:42:55.343738  293537 cri.go:89] found id: "a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:55.343760  293537 cri.go:89] found id: ""
	I0919 18:42:55.343768  293537 logs.go:276] 1 containers: [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99]
	I0919 18:42:55.343824  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.348178  293537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:42:55.348259  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:42:55.387321  293537 cri.go:89] found id: "1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:55.387344  293537 cri.go:89] found id: ""
	I0919 18:42:55.387352  293537 logs.go:276] 1 containers: [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b]
	I0919 18:42:55.387410  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.391715  293537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:42:55.391785  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:42:55.430903  293537 cri.go:89] found id: "c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:55.430932  293537 cri.go:89] found id: ""
	I0919 18:42:55.430941  293537 logs.go:276] 1 containers: [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4]
	I0919 18:42:55.431002  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.434917  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:42:55.434994  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:42:55.477899  293537 cri.go:89] found id: "d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:55.477921  293537 cri.go:89] found id: ""
	I0919 18:42:55.477929  293537 logs.go:276] 1 containers: [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569]
	I0919 18:42:55.477984  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.481536  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:42:55.481605  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:42:55.519995  293537 cri.go:89] found id: "dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:55.520019  293537 cri.go:89] found id: ""
	I0919 18:42:55.520027  293537 logs.go:276] 1 containers: [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a]
	I0919 18:42:55.520084  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.523730  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:42:55.523808  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:42:55.563154  293537 cri.go:89] found id: "4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:55.563178  293537 cri.go:89] found id: ""
	I0919 18:42:55.563186  293537 logs.go:276] 1 containers: [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341]
	I0919 18:42:55.563270  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.567011  293537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:42:55.567115  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:42:55.606868  293537 cri.go:89] found id: "dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:55.606892  293537 cri.go:89] found id: ""
	I0919 18:42:55.606900  293537 logs.go:276] 1 containers: [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82]
	I0919 18:42:55.606979  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.610547  293537 logs.go:123] Gathering logs for dmesg ...
	I0919 18:42:55.610575  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:42:55.626573  293537 logs.go:123] Gathering logs for kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] ...
	I0919 18:42:55.626606  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:55.694807  293537 logs.go:123] Gathering logs for etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] ...
	I0919 18:42:55.694847  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:55.746553  293537 logs.go:123] Gathering logs for coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] ...
	I0919 18:42:55.746589  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:55.790244  293537 logs.go:123] Gathering logs for kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] ...
	I0919 18:42:55.790314  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:55.858123  293537 logs.go:123] Gathering logs for kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] ...
	I0919 18:42:55.858161  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:55.899740  293537 logs.go:123] Gathering logs for kubelet ...
	I0919 18:42:55.899779  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:42:55.926340  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: W0919 18:41:32.997203    1465 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.926585  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: E0919 18:41:32.997256    1465 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.926774  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016421    1465 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.927013  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016471    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.927192  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016528    1465 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.927416  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016541    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.927579  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016587    1465 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.927784  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016607    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.927976  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016654    1465 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.928213  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.928388  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.928600  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.928771  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.928980  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:42:55.987254  293537 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:42:55.987289  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:42:56.137844  293537 logs.go:123] Gathering logs for kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] ...
	I0919 18:42:56.137882  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:56.191991  293537 logs.go:123] Gathering logs for kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] ...
	I0919 18:42:56.192025  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:56.234794  293537 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:42:56.234827  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:42:56.325587  293537 logs.go:123] Gathering logs for container status ...
	I0919 18:42:56.325626  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:42:56.376152  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:56.376180  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:42:56.376244  293537 out.go:270] X Problems detected in kubelet:
	W0919 18:42:56.376253  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:56.376263  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:56.376271  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:56.376278  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:56.376285  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:42:56.376411  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:56.376419  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:43:06.376913  293537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 18:43:06.385497  293537 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 18:43:06.387607  293537 api_server.go:141] control plane version: v1.31.1
	I0919 18:43:06.387660  293537 api_server.go:131] duration metric: took 11.090867395s to wait for apiserver health ...
	I0919 18:43:06.387671  293537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:43:06.387696  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:43:06.387762  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:43:06.425666  293537 cri.go:89] found id: "a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:43:06.425689  293537 cri.go:89] found id: ""
	I0919 18:43:06.425697  293537 logs.go:276] 1 containers: [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99]
	I0919 18:43:06.425753  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.429431  293537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:43:06.429509  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:43:06.466851  293537 cri.go:89] found id: "1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:43:06.466875  293537 cri.go:89] found id: ""
	I0919 18:43:06.466883  293537 logs.go:276] 1 containers: [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b]
	I0919 18:43:06.466939  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.470472  293537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:43:06.470544  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:43:06.509833  293537 cri.go:89] found id: "c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:43:06.509856  293537 cri.go:89] found id: ""
	I0919 18:43:06.509865  293537 logs.go:276] 1 containers: [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4]
	I0919 18:43:06.509923  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.513953  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:43:06.514030  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:43:06.554749  293537 cri.go:89] found id: "d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:43:06.554774  293537 cri.go:89] found id: ""
	I0919 18:43:06.554783  293537 logs.go:276] 1 containers: [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569]
	I0919 18:43:06.554845  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.558418  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:43:06.558487  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:43:06.597281  293537 cri.go:89] found id: "dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:43:06.597304  293537 cri.go:89] found id: ""
	I0919 18:43:06.597312  293537 logs.go:276] 1 containers: [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a]
	I0919 18:43:06.597390  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.600882  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:43:06.600987  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:43:06.640680  293537 cri.go:89] found id: "4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:43:06.640705  293537 cri.go:89] found id: ""
	I0919 18:43:06.640713  293537 logs.go:276] 1 containers: [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341]
	I0919 18:43:06.640779  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.644382  293537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:43:06.644491  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:43:06.696347  293537 cri.go:89] found id: "dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:43:06.696373  293537 cri.go:89] found id: ""
	I0919 18:43:06.696381  293537 logs.go:276] 1 containers: [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82]
	I0919 18:43:06.696436  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.700014  293537 logs.go:123] Gathering logs for dmesg ...
	I0919 18:43:06.700041  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:43:06.720003  293537 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:43:06.720085  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:43:06.860572  293537 logs.go:123] Gathering logs for kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] ...
	I0919 18:43:06.860621  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:43:06.916995  293537 logs.go:123] Gathering logs for kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] ...
	I0919 18:43:06.917032  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:43:06.956031  293537 logs.go:123] Gathering logs for kubelet ...
	I0919 18:43:06.956059  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:43:06.980472  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: W0919 18:41:32.997203    1465 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.980836  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: E0919 18:41:32.997256    1465 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.981031  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016421    1465 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.981267  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016471    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.981447  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016528    1465 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.981668  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016541    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.981833  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016587    1465 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.982037  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016607    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.982224  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016654    1465 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.982461  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.982633  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.982849  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.983016  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.983221  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:43:07.042579  293537 logs.go:123] Gathering logs for etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] ...
	I0919 18:43:07.042616  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:43:07.101867  293537 logs.go:123] Gathering logs for coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] ...
	I0919 18:43:07.101904  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:43:07.146299  293537 logs.go:123] Gathering logs for kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] ...
	I0919 18:43:07.146391  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:43:07.195506  293537 logs.go:123] Gathering logs for kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] ...
	I0919 18:43:07.195545  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:43:07.269552  293537 logs.go:123] Gathering logs for kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] ...
	I0919 18:43:07.269590  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:43:07.315873  293537 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:43:07.315908  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:43:07.406127  293537 logs.go:123] Gathering logs for container status ...
	I0919 18:43:07.406168  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:43:07.460453  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:43:07.460483  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:43:07.460563  293537 out.go:270] X Problems detected in kubelet:
	W0919 18:43:07.460581  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:07.460590  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:43:07.460610  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:07.460616  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:43:07.460626  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:43:07.460633  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:43:07.460640  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:43:17.474173  293537 system_pods.go:59] 18 kube-system pods found
	I0919 18:43:17.474216  293537 system_pods.go:61] "coredns-7c65d6cfc9-lzshk" [fa76a4be-7a2f-482a-bb9a-f8b9caf2eed4] Running
	I0919 18:43:17.474223  293537 system_pods.go:61] "csi-hostpath-attacher-0" [e4afe744-fcb9-4ef1-83bc-7da6426a009e] Running
	I0919 18:43:17.474228  293537 system_pods.go:61] "csi-hostpath-resizer-0" [35bf4614-c53f-4a64-ba65-c4d2585a4618] Running
	I0919 18:43:17.474254  293537 system_pods.go:61] "csi-hostpathplugin-f4lvd" [c4e2104a-24a2-4d2b-982f-90c367e0f6f5] Running
	I0919 18:43:17.474265  293537 system_pods.go:61] "etcd-addons-971880" [48b082ac-da22-4582-a616-c7fc480b4ab7] Running
	I0919 18:43:17.474269  293537 system_pods.go:61] "kindnet-k2v8g" [0e23ba7f-3c08-474e-a24d-b217d7ad4fff] Running
	I0919 18:43:17.474273  293537 system_pods.go:61] "kube-apiserver-addons-971880" [2b208d09-ed22-4147-a82a-c346c0576a72] Running
	I0919 18:43:17.474278  293537 system_pods.go:61] "kube-controller-manager-addons-971880" [ef368deb-bcae-4de9-9cc2-02cce640782e] Running
	I0919 18:43:17.474288  293537 system_pods.go:61] "kube-ingress-dns-minikube" [afb5e949-2f5b-462a-89a2-809679640b8d] Running
	I0919 18:43:17.474292  293537 system_pods.go:61] "kube-proxy-pf8wk" [3daa047c-3145-421d-b44a-a991266a805e] Running
	I0919 18:43:17.474301  293537 system_pods.go:61] "kube-scheduler-addons-971880" [147489f6-4fd9-4831-8bc8-c03b9624170f] Running
	I0919 18:43:17.474305  293537 system_pods.go:61] "metrics-server-84c5f94fbc-jrbzm" [4dcd9c96-80a7-42f2-86ca-69d052a20c31] Running
	I0919 18:43:17.474312  293537 system_pods.go:61] "nvidia-device-plugin-daemonset-6b6sb" [d2508241-1d3e-43e2-b635-ccd577d441ef] Running
	I0919 18:43:17.474316  293537 system_pods.go:61] "registry-66c9cd494c-zjfvp" [95228612-f951-44f9-ac40-a54760497790] Running
	I0919 18:43:17.474331  293537 system_pods.go:61] "registry-proxy-mn6mx" [384cadea-3e7f-4b57-8edb-f51b9f4dde24] Running
	I0919 18:43:17.474337  293537 system_pods.go:61] "snapshot-controller-56fcc65765-hqvnz" [e4649166-f708-403f-b875-0777c7dc2409] Running
	I0919 18:43:17.474340  293537 system_pods.go:61] "snapshot-controller-56fcc65765-jbx4b" [08c94f64-7807-4552-b598-624dd9ca5fad] Running
	I0919 18:43:17.474344  293537 system_pods.go:61] "storage-provisioner" [a5319758-9b7e-4434-b3bb-2abf6f5f5a05] Running
	I0919 18:43:17.474350  293537 system_pods.go:74] duration metric: took 11.086673196s to wait for pod list to return data ...
	I0919 18:43:17.474360  293537 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:43:17.476991  293537 default_sa.go:45] found service account: "default"
	I0919 18:43:17.477019  293537 default_sa.go:55] duration metric: took 2.651822ms for default service account to be created ...
	I0919 18:43:17.477031  293537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:43:17.487749  293537 system_pods.go:86] 18 kube-system pods found
	I0919 18:43:17.487788  293537 system_pods.go:89] "coredns-7c65d6cfc9-lzshk" [fa76a4be-7a2f-482a-bb9a-f8b9caf2eed4] Running
	I0919 18:43:17.487838  293537 system_pods.go:89] "csi-hostpath-attacher-0" [e4afe744-fcb9-4ef1-83bc-7da6426a009e] Running
	I0919 18:43:17.487852  293537 system_pods.go:89] "csi-hostpath-resizer-0" [35bf4614-c53f-4a64-ba65-c4d2585a4618] Running
	I0919 18:43:17.487857  293537 system_pods.go:89] "csi-hostpathplugin-f4lvd" [c4e2104a-24a2-4d2b-982f-90c367e0f6f5] Running
	I0919 18:43:17.487865  293537 system_pods.go:89] "etcd-addons-971880" [48b082ac-da22-4582-a616-c7fc480b4ab7] Running
	I0919 18:43:17.487875  293537 system_pods.go:89] "kindnet-k2v8g" [0e23ba7f-3c08-474e-a24d-b217d7ad4fff] Running
	I0919 18:43:17.487881  293537 system_pods.go:89] "kube-apiserver-addons-971880" [2b208d09-ed22-4147-a82a-c346c0576a72] Running
	I0919 18:43:17.487889  293537 system_pods.go:89] "kube-controller-manager-addons-971880" [ef368deb-bcae-4de9-9cc2-02cce640782e] Running
	I0919 18:43:17.487896  293537 system_pods.go:89] "kube-ingress-dns-minikube" [afb5e949-2f5b-462a-89a2-809679640b8d] Running
	I0919 18:43:17.487914  293537 system_pods.go:89] "kube-proxy-pf8wk" [3daa047c-3145-421d-b44a-a991266a805e] Running
	I0919 18:43:17.487927  293537 system_pods.go:89] "kube-scheduler-addons-971880" [147489f6-4fd9-4831-8bc8-c03b9624170f] Running
	I0919 18:43:17.487932  293537 system_pods.go:89] "metrics-server-84c5f94fbc-jrbzm" [4dcd9c96-80a7-42f2-86ca-69d052a20c31] Running
	I0919 18:43:17.487948  293537 system_pods.go:89] "nvidia-device-plugin-daemonset-6b6sb" [d2508241-1d3e-43e2-b635-ccd577d441ef] Running
	I0919 18:43:17.487959  293537 system_pods.go:89] "registry-66c9cd494c-zjfvp" [95228612-f951-44f9-ac40-a54760497790] Running
	I0919 18:43:17.487964  293537 system_pods.go:89] "registry-proxy-mn6mx" [384cadea-3e7f-4b57-8edb-f51b9f4dde24] Running
	I0919 18:43:17.487969  293537 system_pods.go:89] "snapshot-controller-56fcc65765-hqvnz" [e4649166-f708-403f-b875-0777c7dc2409] Running
	I0919 18:43:17.487975  293537 system_pods.go:89] "snapshot-controller-56fcc65765-jbx4b" [08c94f64-7807-4552-b598-624dd9ca5fad] Running
	I0919 18:43:17.487979  293537 system_pods.go:89] "storage-provisioner" [a5319758-9b7e-4434-b3bb-2abf6f5f5a05] Running
	I0919 18:43:17.487987  293537 system_pods.go:126] duration metric: took 10.951104ms to wait for k8s-apps to be running ...
	I0919 18:43:17.488020  293537 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:43:17.488142  293537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:43:17.501314  293537 system_svc.go:56] duration metric: took 13.293118ms WaitForService to wait for kubelet
	I0919 18:43:17.501349  293537 kubeadm.go:582] duration metric: took 2m29.407710689s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:43:17.501369  293537 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:43:17.504944  293537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 18:43:17.504984  293537 node_conditions.go:123] node cpu capacity is 2
	I0919 18:43:17.504998  293537 node_conditions.go:105] duration metric: took 3.620313ms to run NodePressure ...
	I0919 18:43:17.505009  293537 start.go:241] waiting for startup goroutines ...
	I0919 18:43:17.505016  293537 start.go:246] waiting for cluster config update ...
	I0919 18:43:17.505032  293537 start.go:255] writing updated cluster config ...
	I0919 18:43:17.505333  293537 ssh_runner.go:195] Run: rm -f paused
	I0919 18:43:17.844712  293537 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:43:17.848004  293537 out.go:177] * Done! kubectl is now configured to use "addons-971880" cluster and "default" namespace by default
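
The repeated "Problems detected in kubelet" blocks in the log above all trace to the Node authorizer: a kubelet may only read secrets and configmaps referenced by pods already bound to its node, and here the kubelet's reflectors began their list/watch before those bindings landed, which is why every message ends in "no relationship found between node 'addons-971880' and this object". These warnings are typically transient start-up noise rather than a cause of the test failures. A minimal sketch to replay the authorizer's decision, assuming kubectl access to this cluster (the namespace, secret, and node name are taken from the log lines above):

	# Impersonate the kubelet's identity and ask whether a broad list is allowed.
	kubectl auth can-i list secrets \
	  --namespace gcp-auth \
	  --as system:node:addons-971880 \
	  --as-group system:nodes
	# "no" is the expected answer: the node authorizer grants reads only for
	# individual objects referenced by pods bound to that node, never a bare list.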
	
	
	==> CRI-O <==
	Sep 19 18:55:07 addons-971880 crio[951]: time="2024-09-19 18:55:07.866270685Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 18:55:07 addons-971880 crio[951]: time="2024-09-19 18:55:07.885362714Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d429238332eb0b406feae98adeb6a4b57854cc7176d0655d68623c4c835edc67/merged/etc/passwd: no such file or directory"
	Sep 19 18:55:07 addons-971880 crio[951]: time="2024-09-19 18:55:07.885556281Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d429238332eb0b406feae98adeb6a4b57854cc7176d0655d68623c4c835edc67/merged/etc/group: no such file or directory"
	Sep 19 18:55:07 addons-971880 crio[951]: time="2024-09-19 18:55:07.928583420Z" level=info msg="Created container 2d8a29047aa75e0313cea0c8e7406f33706872053af950f6859299e13b1427f7: default/hello-world-app-55bf9c44b4-qvhwn/hello-world-app" id=dd9007a7-d392-43d0-b9b1-8897a3786228 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 18:55:07 addons-971880 crio[951]: time="2024-09-19 18:55:07.929412441Z" level=info msg="Starting container: 2d8a29047aa75e0313cea0c8e7406f33706872053af950f6859299e13b1427f7" id=70c6ef62-5296-4a67-8b81-ca68a0f2a9df name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 18:55:07 addons-971880 crio[951]: time="2024-09-19 18:55:07.937075548Z" level=info msg="Started container" PID=8036 containerID=2d8a29047aa75e0313cea0c8e7406f33706872053af950f6859299e13b1427f7 description=default/hello-world-app-55bf9c44b4-qvhwn/hello-world-app id=70c6ef62-5296-4a67-8b81-ca68a0f2a9df name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee31dc802b569e48a50c4f8e54bf076109c15687a6cc075c812018c5ab968ee7
	Sep 19 18:55:08 addons-971880 crio[951]: time="2024-09-19 18:55:08.576627457Z" level=info msg="Removing container: 0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc" id=6b738187-4821-4545-9892-a96f35577cf3 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:55:08 addons-971880 crio[951]: time="2024-09-19 18:55:08.599565038Z" level=info msg="Removed container 0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=6b738187-4821-4545-9892-a96f35577cf3 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:55:10 addons-971880 crio[951]: time="2024-09-19 18:55:10.332366071Z" level=info msg="Stopping container: d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da (timeout: 2s)" id=80f3f3e6-cd80-428d-a7c2-8970627d6e36 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:55:11 addons-971880 crio[951]: time="2024-09-19 18:55:11.832756125Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1741254e-0250-4e6c-9acf-12579dcc567e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:11 addons-971880 crio[951]: time="2024-09-19 18:55:11.832991251Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=1741254e-0250-4e6c-9acf-12579dcc567e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.339431470Z" level=warning msg="Stopping container d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=80f3f3e6-cd80-428d-a7c2-8970627d6e36 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:55:12 addons-971880 conmon[4697]: conmon d355220e4e1c0b8d48ac <ninfo>: container 4708 exited with status 137
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.474891400Z" level=info msg="Stopped container d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da: ingress-nginx/ingress-nginx-controller-bc57996ff-nsltd/controller" id=80f3f3e6-cd80-428d-a7c2-8970627d6e36 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.475444606Z" level=info msg="Stopping pod sandbox: e5894e60a05af3aff389c6846844b41d27cea2d792666865f3d70549f6ca3b31" id=0df56b70-49f3-4436-931f-f2f51df9c393 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.479253095Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-ZI3DBNHLBN6UTFYF - [0:0]\n:KUBE-HP-M4NOGRXLWCJKF5OZ - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-M4NOGRXLWCJKF5OZ\n-X KUBE-HP-ZI3DBNHLBN6UTFYF\nCOMMIT\n"
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.483672037Z" level=info msg="Closing host port tcp:80"
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.483728143Z" level=info msg="Closing host port tcp:443"
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.485248741Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.485296429Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.485515580Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-nsltd Namespace:ingress-nginx ID:e5894e60a05af3aff389c6846844b41d27cea2d792666865f3d70549f6ca3b31 UID:a9e27002-ba35-4377-9a70-4d68a416f3bf NetNS:/var/run/netns/8ea8b6a5-d733-43ea-aaf0-9eee9955c0cd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.485652376Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-nsltd from CNI network \"kindnet\" (type=ptp)"
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.517107602Z" level=info msg="Stopped pod sandbox: e5894e60a05af3aff389c6846844b41d27cea2d792666865f3d70549f6ca3b31" id=0df56b70-49f3-4436-931f-f2f51df9c393 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.587511210Z" level=info msg="Removing container: d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da" id=9b9b6445-5aac-4543-a9e2-541a82790f10 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:55:12 addons-971880 crio[951]: time="2024-09-19 18:55:12.602665019Z" level=info msg="Removed container d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da: ingress-nginx/ingress-nginx-controller-bc57996ff-nsltd/controller" id=9b9b6445-5aac-4543-a9e2-541a82790f10 name=/runtime.v1.RuntimeService/RemoveContainer
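
The stop sequence above is worth reading closely: CRI-O signals the ingress-nginx controller with a 2-second timeout ("Stopping container ... (timeout: 2s)"), the process ignores SIGTERM for that window, and conmon then records exit status 137, i.e. 128 + 9 (SIGKILL). The same behaviour can be reproduced by hand with crictl, a sketch assuming shell access to the node and a container ID taken from crictl ps:

	# SIGTERM first, then SIGKILL once the 2-second grace period lapses;
	# a container that ignores SIGTERM will exit 137, as logged above.
	sudo crictl stop --timeout 2 <container-id>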
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	2d8a29047aa75       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        9 seconds ago       Running             hello-world-app            0                   ee31dc802b569       hello-world-app-55bf9c44b4-qvhwn
	cc1cce7cf558d       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                              2 minutes ago       Running             nginx                      0                   ea55f29e4f901       nginx
	7e2229737603a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69                 12 minutes ago      Running             gcp-auth                   0                   01e59bcb2da91       gcp-auth-89d5ffd79-8f6t2
	e4ece102ee198       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             12 minutes ago      Running             local-path-provisioner     0                   567fd5373e217       local-path-provisioner-86d989889c-s9p2l
	c62dfc6e67af7       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              12 minutes ago      Running             yakd                       0                   bb619c50918f0       yakd-dashboard-67d98fc6b-bfrtb
	719f56df5239d       nvcr.io/nvidia/k8s-device-plugin@sha256:cdd05f9d89f0552478d46474005e86b98795ad364664f644225b99d94978e680                     13 minutes ago      Running             nvidia-device-plugin-ctr   0                   386544d95701f       nvidia-device-plugin-daemonset-6b6sb
	b02fc97f9417e       420193b27261a8d37b9fb1faeed45094cefa47e72a7538fd5a6c05e8b5ce192e                                                             13 minutes ago      Exited              patch                      2                   a95ffb65bcca5       ingress-nginx-admission-patch-t7x2p
	e418333c9f79e       gcr.io/cloud-spanner-emulator/emulator@sha256:41ec188288c7943f488600462b2b74002814e52439be82d15de33c3ee4898a58               13 minutes ago      Running             cloud-spanner-emulator     0                   2d5041a3b3e10       cloud-spanner-emulator-769b77f747-wz2j4
	dd64b887fd1c5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:7c4c1a6ca8855c524a64983eaf590e126a669ae12df83ad65de281c9beee13d3   13 minutes ago      Exited              create                     0                   aacb6089fc3b4       ingress-nginx-admission-create-7dt4w
	2211b84a8bcc0       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        13 minutes ago      Running             metrics-server             0                   022f53b7544e5       metrics-server-84c5f94fbc-jrbzm
	645c6e1070b57       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             13 minutes ago      Running             storage-provisioner        0                   61ec5f92f3e97       storage-provisioner
	c57cc379e1c9a       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             13 minutes ago      Running             coredns                    0                   2fb9e3187c953       coredns-7c65d6cfc9-lzshk
	dc4aa79f1b326       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                             14 minutes ago      Running             kube-proxy                 0                   b43f35ceba531       kube-proxy-pf8wk
	dcda5994fb9da       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                             14 minutes ago      Running             kindnet-cni                0                   874829284dbe9       kindnet-k2v8g
	4e8ba4e202807       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                             14 minutes ago      Running             kube-controller-manager    0                   7ee5f4b8e79eb       kube-controller-manager-addons-971880
	d599c639765e1       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                             14 minutes ago      Running             kube-scheduler             0                   a0d73f380837d       kube-scheduler-addons-971880
	a6739fa07ff39       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                             14 minutes ago      Running             kube-apiserver             0                   92e7a9cf57f7c       kube-apiserver-addons-971880
	1a7797ceebe32       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             14 minutes ago      Running             etcd                       0                   0a51e9c6a88a2       etcd-addons-971880
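
Several IMAGE entries in the table above are bare image IDs rather than repo tags (the runtime prints whichever reference it stored). They can be mapped back on the node with crictl; a sketch assuming shell access and jq installed:

	# Resolve the etcd image ID from the table to its repo tags and digest.
	sudo crictl inspecti 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da \
	  | jq '.status.repoTags, .status.repoDigests'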
	
	
	==> coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] <==
	[INFO] 10.244.0.15:34202 - 56364 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000210347s
	[INFO] 10.244.0.15:58149 - 38892 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002341045s
	[INFO] 10.244.0.15:58149 - 27857 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002397331s
	[INFO] 10.244.0.15:49676 - 34537 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000107306s
	[INFO] 10.244.0.15:49676 - 13548 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157021s
	[INFO] 10.244.0.15:57838 - 45202 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00012402s
	[INFO] 10.244.0.15:57838 - 669 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179569s
	[INFO] 10.244.0.15:51630 - 63490 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056164s
	[INFO] 10.244.0.15:37480 - 42395 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051873s
	[INFO] 10.244.0.15:37480 - 26265 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048607s
	[INFO] 10.244.0.15:51630 - 21823 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084332s
	[INFO] 10.244.0.15:55956 - 23539 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001275642s
	[INFO] 10.244.0.15:55956 - 9713 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001339642s
	[INFO] 10.244.0.15:54413 - 50779 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067175s
	[INFO] 10.244.0.15:54413 - 3672 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000064312s
	[INFO] 10.244.0.20:41195 - 28456 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00097449s
	[INFO] 10.244.0.20:38142 - 31604 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001120663s
	[INFO] 10.244.0.20:49823 - 61218 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160804s
	[INFO] 10.244.0.20:46939 - 4524 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127639s
	[INFO] 10.244.0.20:36103 - 53599 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010249s
	[INFO] 10.244.0.20:55932 - 17378 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129329s
	[INFO] 10.244.0.20:58542 - 47562 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002373504s
	[INFO] 10.244.0.20:41076 - 61778 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002174587s
	[INFO] 10.244.0.20:51892 - 37411 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001802296s
	[INFO] 10.244.0.20:53343 - 52840 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001659954s
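
The NXDOMAIN bursts above are ordinary resolv.conf search-list expansion, not resolution failures: with Kubernetes' default ndots:5, a relative name such as storage.googleapis.com is first tried against every search domain (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the cloud suffix) before the absolute name finally answers NOERROR. A trailing dot marks a name fully qualified and skips the fan-out; a minimal check from inside any pod, assuming an image that ships glibc's getent:

	getent hosts storage.googleapis.com.   # absolute: a single upstream query
	getent hosts storage.googleapis.com    # relative: search-list expansion first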
	
	
	==> describe nodes <==
	Name:               addons-971880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-971880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-971880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_40_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-971880
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:40:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-971880
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:55:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:53:20 +0000   Thu, 19 Sep 2024 18:40:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:53:20 +0000   Thu, 19 Sep 2024 18:40:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:53:20 +0000   Thu, 19 Sep 2024 18:40:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:53:20 +0000   Thu, 19 Sep 2024 18:41:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-971880
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd7bc352662e4b16b74f8eda34921dfa
	  System UUID:                760732df-5c49-4c7a-baae-21e5ed371ca8
	  Boot ID:                    52db61fe-4049-4d60-8bc0-73f7fa38c59e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     cloud-spanner-emulator-769b77f747-wz2j4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-qvhwn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-89d5ffd79-8f6t2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-lzshk                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-addons-971880                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-k2v8g                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-971880               250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-971880      200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-pf8wk                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-971880               100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-jrbzm            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         14m
	  kube-system                 nvidia-device-plugin-daemonset-6b6sb       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-s9p2l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-bfrtb             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             548Mi (6%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node addons-971880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node addons-971880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node addons-971880 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m                kubelet          Node addons-971880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                kubelet          Node addons-971880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                kubelet          Node addons-971880 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                node-controller  Node addons-971880 event: Registered Node addons-971880 in Controller
	  Normal   NodeReady                13m                kubelet          Node addons-971880 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep19 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014930] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.480178] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.743811] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.535974] kauditd_printk_skb: 36 callbacks suppressed
	[Sep19 17:29] hrtimer: interrupt took 7222366 ns
	[Sep19 17:52] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] <==
	{"level":"info","ts":"2024-09-19T18:40:38.476120Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T18:40:38.476422Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T18:40:38.476471Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T18:40:38.476754Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:40:38.477130Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:40:38.477659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T18:40:38.477789Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:38.477860Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:38.477887Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:38.478195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-09-19T18:40:49.175324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.946086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-09-19T18:40:49.175485Z","caller":"traceutil/trace.go:171","msg":"trace[830918283] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:342; }","duration":"233.225348ms","start":"2024-09-19T18:40:48.942248Z","end":"2024-09-19T18:40:49.175473Z","steps":["trace[830918283] 'range keys from in-memory index tree'  (duration: 232.868524ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:40:51.797488Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.274026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2024-09-19T18:40:51.797733Z","caller":"traceutil/trace.go:171","msg":"trace[1182424122] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:370; }","duration":"109.630686ms","start":"2024-09-19T18:40:51.688089Z","end":"2024-09-19T18:40:51.797719Z","steps":["trace[1182424122] 'range keys from in-memory index tree'  (duration: 108.949628ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:51.921159Z","caller":"traceutil/trace.go:171","msg":"trace[533958715] linearizableReadLoop","detail":"{readStateIndex:381; appliedIndex:380; }","duration":"112.097711ms","start":"2024-09-19T18:40:51.809047Z","end":"2024-09-19T18:40:51.921145Z","steps":["trace[533958715] 'read index received'  (duration: 41.359334ms)","trace[533958715] 'applied index is now lower than readState.Index'  (duration: 70.737803ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:40:51.921521Z","caller":"traceutil/trace.go:171","msg":"trace[119087165] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"208.072803ms","start":"2024-09-19T18:40:51.713438Z","end":"2024-09-19T18:40:51.921511Z","steps":["trace[119087165] 'process raft request'  (duration: 207.578674ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:40:51.934289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.615898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-19T18:40:51.934428Z","caller":"traceutil/trace.go:171","msg":"trace[1898066912] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:371; }","duration":"125.3687ms","start":"2024-09-19T18:40:51.809041Z","end":"2024-09-19T18:40:51.934410Z","steps":["trace[1898066912] 'agreement among raft nodes before linearized reading'  (duration: 112.594212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:40:51.947086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.812529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-19T18:40:51.947245Z","caller":"traceutil/trace.go:171","msg":"trace[273625168] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:378; }","duration":"137.981505ms","start":"2024-09-19T18:40:51.809251Z","end":"2024-09-19T18:40:51.947233Z","steps":["trace[273625168] 'agreement among raft nodes before linearized reading'  (duration: 137.773891ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:52.209425Z","caller":"traceutil/trace.go:171","msg":"trace[74668238] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"112.887175ms","start":"2024-09-19T18:40:52.096520Z","end":"2024-09-19T18:40:52.209407Z","steps":["trace[74668238] 'process raft request'  (duration: 103.152266ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:52.740865Z","caller":"traceutil/trace.go:171","msg":"trace[748612513] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"103.493878ms","start":"2024-09-19T18:40:52.637355Z","end":"2024-09-19T18:40:52.740849Z","steps":["trace[748612513] 'process raft request'  (duration: 24.762129ms)","trace[748612513] 'compare'  (duration: 78.348669ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:50:38.544916Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1551}
	{"level":"info","ts":"2024-09-19T18:50:38.573319Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1551,"took":"27.857921ms","hash":3037137597,"current-db-size-bytes":6537216,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3432448,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-19T18:50:38.573371Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3037137597,"revision":1551,"compact-revision":-1}
	
	
	==> gcp-auth [7e2229737603afbb0dacc6d3df819da59af22f172e365f53a2f81a5439c8bcc4] <==
	2024/09/19 18:42:34 GCP Auth Webhook started!
	2024/09/19 18:43:17 Ready to marshal response ...
	2024/09/19 18:43:17 Ready to write response ...
	2024/09/19 18:43:18 Ready to marshal response ...
	2024/09/19 18:43:18 Ready to write response ...
	2024/09/19 18:43:18 Ready to marshal response ...
	2024/09/19 18:43:18 Ready to write response ...
	2024/09/19 18:51:32 Ready to marshal response ...
	2024/09/19 18:51:32 Ready to write response ...
	2024/09/19 18:51:38 Ready to marshal response ...
	2024/09/19 18:51:38 Ready to write response ...
	2024/09/19 18:52:02 Ready to marshal response ...
	2024/09/19 18:52:02 Ready to write response ...
	2024/09/19 18:52:48 Ready to marshal response ...
	2024/09/19 18:52:48 Ready to write response ...
	2024/09/19 18:55:06 Ready to marshal response ...
	2024/09/19 18:55:06 Ready to write response ...
	
	
	==> kernel <==
	 18:55:17 up  2:37,  0 users,  load average: 0.07, 0.29, 0.69
	Linux addons-971880 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] <==
	I0919 18:53:12.713708       1 main.go:299] handling current node
	I0919 18:53:22.713692       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:53:22.713828       1 main.go:299] handling current node
	I0919 18:53:32.721766       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:53:32.721803       1 main.go:299] handling current node
	I0919 18:53:42.713664       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:53:42.713806       1 main.go:299] handling current node
	I0919 18:53:52.713750       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:53:52.713783       1 main.go:299] handling current node
	I0919 18:54:02.720203       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:02.720237       1 main.go:299] handling current node
	I0919 18:54:12.714682       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:12.714800       1 main.go:299] handling current node
	I0919 18:54:22.718930       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:22.719099       1 main.go:299] handling current node
	I0919 18:54:32.721473       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:32.721516       1 main.go:299] handling current node
	I0919 18:54:42.713651       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:42.713687       1 main.go:299] handling current node
	I0919 18:54:52.714549       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:52.714662       1 main.go:299] handling current node
	I0919 18:55:02.721564       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:02.721606       1 main.go:299] handling current node
	I0919 18:55:12.713656       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:12.713690       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] <==
	E0919 18:42:43.624132       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 18:42:43.683381       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 18:51:49.284184       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0919 18:51:51.111893       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0919 18:52:18.872919       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.873058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:18.898841       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.898895       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:18.939986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.940228       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:18.978013       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.978127       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:19.011407       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:19.011546       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 18:52:19.978902       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0919 18:52:20.012417       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 18:52:20.066661       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0919 18:52:42.462108       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0919 18:52:43.587477       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0919 18:52:48.074937       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0919 18:52:48.382732       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.199.165"}
	I0919 18:55:06.732560       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.4.85"}
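
Note: the 18:42:43 error above, where the apiserver fails to update APIService "v1beta1.metrics.k8s.io" with a 503, involves the same aggregation layer that the later failing kubectl top calls depend on. A quick check of whether the aggregated API ever became available:

	kubectl --context addons-971880 get apiservice v1beta1.metrics.k8s.io

If the Available condition stays False, kubectl top cannot succeed regardless of pod health.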
	
	
	==> kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] <==
	W0919 18:54:08.582395       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:54:08.582443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:54:20.290314       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:54:20.290358       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:54:21.153118       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:54:21.153165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:54:37.157598       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:54:37.157640       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:54:44.060838       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:54:44.060881       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:54:54.905160       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:54:54.905203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:55:06.494396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="56.496592ms"
	I0919 18:55:06.510053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.05448ms"
	I0919 18:55:06.510219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.079µs"
	I0919 18:55:06.513703       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.018µs"
	I0919 18:55:08.606762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.825968ms"
	I0919 18:55:08.607642       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="575.441µs"
	W0919 18:55:08.950962       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:08.951010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:55:09.299669       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0919 18:55:09.304657       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.064µs"
	I0919 18:55:09.306639       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0919 18:55:13.392774       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:13.392820       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] <==
	I0919 18:40:52.838027       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:40:53.554142       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 18:40:53.554262       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:40:53.934955       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:40:53.935024       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:40:53.938053       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:40:53.938361       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:40:53.938589       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:40:53.939672       1 config.go:199] "Starting service config controller"
	I0919 18:40:53.939717       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:40:53.939750       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:40:53.939765       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:40:53.940395       1 config.go:328] "Starting node config controller"
	I0919 18:40:53.940414       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:40:54.042662       1 shared_informer.go:320] Caches are synced for node config
	I0919 18:40:54.056362       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:40:54.056393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] <==
	W0919 18:40:41.910847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:40:41.910906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.911000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:40:41.911041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.911123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:40:41.911163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.916391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 18:40:41.916611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.916751       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 18:40:41.916806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.916886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 18:40:41.916941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:41.917057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:41.917171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:40:41.917314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:40:41.917429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:40:41.917859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.918004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:40:41.918067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 18:40:43.005131       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 18:55:07 addons-971880 kubelet[1465]: I0919 18:55:07.879783    1465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j29g8\" (UniqueName: \"kubernetes.io/projected/afb5e949-2f5b-462a-89a2-809679640b8d-kube-api-access-j29g8\") pod \"afb5e949-2f5b-462a-89a2-809679640b8d\" (UID: \"afb5e949-2f5b-462a-89a2-809679640b8d\") "
	Sep 19 18:55:07 addons-971880 kubelet[1465]: I0919 18:55:07.887749    1465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb5e949-2f5b-462a-89a2-809679640b8d-kube-api-access-j29g8" (OuterVolumeSpecName: "kube-api-access-j29g8") pod "afb5e949-2f5b-462a-89a2-809679640b8d" (UID: "afb5e949-2f5b-462a-89a2-809679640b8d"). InnerVolumeSpecName "kube-api-access-j29g8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:55:07 addons-971880 kubelet[1465]: I0919 18:55:07.980766    1465 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j29g8\" (UniqueName: \"kubernetes.io/projected/afb5e949-2f5b-462a-89a2-809679640b8d-kube-api-access-j29g8\") on node \"addons-971880\" DevicePath \"\""
	Sep 19 18:55:08 addons-971880 kubelet[1465]: I0919 18:55:08.573848    1465 scope.go:117] "RemoveContainer" containerID="0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc"
	Sep 19 18:55:08 addons-971880 kubelet[1465]: I0919 18:55:08.599820    1465 scope.go:117] "RemoveContainer" containerID="0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc"
	Sep 19 18:55:08 addons-971880 kubelet[1465]: E0919 18:55:08.600312    1465 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc\": container with ID starting with 0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc not found: ID does not exist" containerID="0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc"
	Sep 19 18:55:08 addons-971880 kubelet[1465]: I0919 18:55:08.600359    1465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc"} err="failed to get container status \"0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc\": rpc error: code = NotFound desc = could not find container \"0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc\": container with ID starting with 0cc2661b051ab76c8837f265b2e201e10cd5f5427d89155148adc8b2051065bc not found: ID does not exist"
	Sep 19 18:55:08 addons-971880 kubelet[1465]: I0919 18:55:08.621711    1465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-qvhwn" podStartSLOduration=1.619224264 podStartE2EDuration="2.621691435s" podCreationTimestamp="2024-09-19 18:55:06 +0000 UTC" firstStartedPulling="2024-09-19 18:55:06.857010077 +0000 UTC m=+863.156879118" lastFinishedPulling="2024-09-19 18:55:07.859477247 +0000 UTC m=+864.159346289" observedRunningTime="2024-09-19 18:55:08.596835957 +0000 UTC m=+864.896705007" watchObservedRunningTime="2024-09-19 18:55:08.621691435 +0000 UTC m=+864.921560477"
	Sep 19 18:55:09 addons-971880 kubelet[1465]: I0919 18:55:09.834078    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="763a9e4e-a288-44fd-b00b-2757b2c6944a" path="/var/lib/kubelet/pods/763a9e4e-a288-44fd-b00b-2757b2c6944a/volumes"
	Sep 19 18:55:09 addons-971880 kubelet[1465]: I0919 18:55:09.834527    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb5e949-2f5b-462a-89a2-809679640b8d" path="/var/lib/kubelet/pods/afb5e949-2f5b-462a-89a2-809679640b8d/volumes"
	Sep 19 18:55:09 addons-971880 kubelet[1465]: I0919 18:55:09.834879    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8dca634-2d09-44bf-a3f5-cd527238b421" path="/var/lib/kubelet/pods/f8dca634-2d09-44bf-a3f5-cd527238b421/volumes"
	Sep 19 18:55:11 addons-971880 kubelet[1465]: E0919 18:55:11.833207    1465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4f103fbd-06db-4d16-a162-93cbfb48a68e"
	Sep 19 18:55:12 addons-971880 kubelet[1465]: I0919 18:55:12.586283    1465 scope.go:117] "RemoveContainer" containerID="d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da"
	Sep 19 18:55:12 addons-971880 kubelet[1465]: I0919 18:55:12.603031    1465 scope.go:117] "RemoveContainer" containerID="d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da"
	Sep 19 18:55:12 addons-971880 kubelet[1465]: E0919 18:55:12.603406    1465 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da\": container with ID starting with d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da not found: ID does not exist" containerID="d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da"
	Sep 19 18:55:12 addons-971880 kubelet[1465]: I0919 18:55:12.603446    1465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da"} err="failed to get container status \"d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da\": rpc error: code = NotFound desc = could not find container \"d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da\": container with ID starting with d355220e4e1c0b8d48ac53159e748820425845697b09618d2bebb72ac41595da not found: ID does not exist"
	Sep 19 18:55:12 addons-971880 kubelet[1465]: I0919 18:55:12.616791    1465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k85zw\" (UniqueName: \"kubernetes.io/projected/a9e27002-ba35-4377-9a70-4d68a416f3bf-kube-api-access-k85zw\") pod \"a9e27002-ba35-4377-9a70-4d68a416f3bf\" (UID: \"a9e27002-ba35-4377-9a70-4d68a416f3bf\") "
	Sep 19 18:55:12 addons-971880 kubelet[1465]: I0919 18:55:12.616869    1465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9e27002-ba35-4377-9a70-4d68a416f3bf-webhook-cert\") pod \"a9e27002-ba35-4377-9a70-4d68a416f3bf\" (UID: \"a9e27002-ba35-4377-9a70-4d68a416f3bf\") "
	Sep 19 18:55:12 addons-971880 kubelet[1465]: I0919 18:55:12.618808    1465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e27002-ba35-4377-9a70-4d68a416f3bf-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a9e27002-ba35-4377-9a70-4d68a416f3bf" (UID: "a9e27002-ba35-4377-9a70-4d68a416f3bf"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 19 18:55:12 addons-971880 kubelet[1465]: I0919 18:55:12.620901    1465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e27002-ba35-4377-9a70-4d68a416f3bf-kube-api-access-k85zw" (OuterVolumeSpecName: "kube-api-access-k85zw") pod "a9e27002-ba35-4377-9a70-4d68a416f3bf" (UID: "a9e27002-ba35-4377-9a70-4d68a416f3bf"). InnerVolumeSpecName "kube-api-access-k85zw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:55:12 addons-971880 kubelet[1465]: I0919 18:55:12.718139    1465 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k85zw\" (UniqueName: \"kubernetes.io/projected/a9e27002-ba35-4377-9a70-4d68a416f3bf-kube-api-access-k85zw\") on node \"addons-971880\" DevicePath \"\""
	Sep 19 18:55:12 addons-971880 kubelet[1465]: I0919 18:55:12.718180    1465 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a9e27002-ba35-4377-9a70-4d68a416f3bf-webhook-cert\") on node \"addons-971880\" DevicePath \"\""
	Sep 19 18:55:13 addons-971880 kubelet[1465]: I0919 18:55:13.833820    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e27002-ba35-4377-9a70-4d68a416f3bf" path="/var/lib/kubelet/pods/a9e27002-ba35-4377-9a70-4d68a416f3bf/volumes"
	Sep 19 18:55:14 addons-971880 kubelet[1465]: E0919 18:55:14.169001    1465 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772114168759994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529271,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:14 addons-971880 kubelet[1465]: E0919 18:55:14.169033    1465 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772114168759994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529271,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
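
Note: the eviction-manager errors above are the kubelet objecting that cri-o's ImageFsInfo response carries no container-filesystem usage (ContainerFilesystems is empty in the dumped response). A hedged way to view the raw response cri-o returns, assuming crictl is present on the node as in the default minikube image:

	minikube -p addons-971880 ssh -- sudo crictl imagefsinfo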
	
	
	==> storage-provisioner [645c6e1070b57c423d66af2e3d6e057cece2b42bc10fd145e4e32e7603750853] <==
	I0919 18:41:34.075595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:41:34.089415       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:41:34.089614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:41:34.099519       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:41:34.099789       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-971880_54ef0796-8bc7-4c84-86c5-8ebc4c91b26f!
	I0919 18:41:34.100759       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f411cc94-3279-4140-8a35-80322ca09e0a", APIVersion:"v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-971880_54ef0796-8bc7-4c84-86c5-8ebc4c91b26f became leader
	I0919 18:41:34.201066       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-971880_54ef0796-8bc7-4c84-86c5-8ebc4c91b26f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-971880 -n addons-971880
helpers_test.go:261: (dbg) Run:  kubectl --context addons-971880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-971880 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-971880 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-971880/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:43:18 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w22nf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w22nf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  12m                  default-scheduler  Successfully assigned default/busybox to addons-971880
	  Normal   Pulling    10m (x4 over 12m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 12m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 12m)    kubelet            Error: ErrImagePull
	  Warning  Failed     10m (x6 over 12m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    114s (x43 over 12m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
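
Note: the busybox pull failures above report an authentication error for a public gcr.io image; given the fake credentials visible in the pod's Environment block (PROJECT_ID=this_is_fake and friends), this plausibly points at the gcp-auth addon's injected credentials rather than the registry itself. A hedged way to test the pull path with no pod-level credentials involved:

	minikube -p addons-971880 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
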
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (327.48s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.858081ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-jrbzm" [4dcd9c96-80a7-42f2-86ca-69d052a20c31] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00606245s
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (97.562699ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 11m36.401759706s

                                                
                                                
** /stderr **
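
Note: every retry below fails the same way, so kubectl top cannot recover until the metrics pipeline itself does. Two quick checks, as a sketch (the deployment name metrics-server matches the pod name above):

	kubectl --context addons-971880 -n kube-system logs deploy/metrics-server
	kubectl --context addons-971880 get apiservice v1beta1.metrics.k8s.io
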
I0919 18:52:25.405590  292666 retry.go:31] will retry after 1.893988555s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (99.013537ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 11m38.396238969s

                                                
                                                
** /stderr **
I0919 18:52:27.399170  292666 retry.go:31] will retry after 4.66927253s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (121.718822ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 11m43.186460922s

                                                
                                                
** /stderr **
I0919 18:52:32.190520  292666 retry.go:31] will retry after 6.608448647s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (107.258617ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 11m49.903405673s

                                                
                                                
** /stderr **
I0919 18:52:38.906571  292666 retry.go:31] will retry after 9.582732907s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (96.833353ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 11m59.583382096s

                                                
                                                
** /stderr **
I0919 18:52:48.587080  292666 retry.go:31] will retry after 14.098215583s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (88.553648ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 12m13.774516773s

                                                
                                                
** /stderr **
I0919 18:53:02.777216  292666 retry.go:31] will retry after 24.527511789s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (91.097178ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 12m38.393681257s
** /stderr **
I0919 18:53:27.396593  292666 retry.go:31] will retry after 22.574964982s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (102.85364ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 13m1.072264685s
** /stderr **
I0919 18:53:50.075264  292666 retry.go:31] will retry after 1m3.291412088s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (91.105859ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 14m4.454322704s
** /stderr **
I0919 18:54:53.458122  292666 retry.go:31] will retry after 32.456081382s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (93.434505ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 14m37.003311096s
** /stderr **
I0919 18:55:26.007988  292666 retry.go:31] will retry after 1m13.037340409s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (85.062221ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 15m50.128543181s
** /stderr **
I0919 18:56:39.131178  292666 retry.go:31] will retry after 1m4.00000313s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-971880 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-971880 top pods -n kube-system: exit status 1 (91.571549ms)
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-lzshk, age: 16m54.223408357s
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
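The failure above follows a clear pattern: every kubectl top pods probe exits non-zero while metrics-server has no scrape samples for the pods, and retry.go spaces the attempts with growing, jittered delays (roughly 1.9s doubling up toward a minute-plus) until the overall budget runs out and addons_test.go:431 declares the check failed. A minimal stdlib-only Go sketch of that backoff shape; retryExpo and topPods are illustrative names for this sketch, not minikube's actual helpers:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// topPods runs the same probe the test loops on; it exits non-zero
// until metrics-server can serve pod metrics for the namespace.
func topPods(kubectx string) error {
	return exec.Command("kubectl", "--context", kubectx,
		"top", "pods", "-n", "kube-system").Run()
}

// retryExpo retries fn with doubling, jittered delays until maxTime
// elapses, the shape of the "will retry after ..." lines above.
func retryExpo(fn func() error, initial, maxTime time.Duration) error {
	deadline := time.Now().Add(maxTime)
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		// Add up to 50% jitter so parallel tests do not retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		if time.Now().Add(sleep).After(deadline) {
			return fmt.Errorf("timed out after backoff: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	// Hypothetical usage against the profile from this report.
	if err := retryExpo(func() error { return topPods("addons-971880") },
		2*time.Second, 5*time.Minute); err != nil {
		fmt.Println("failed checking metric server:", err)
	}
}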
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-971880
helpers_test.go:235: (dbg) docker inspect addons-971880:
-- stdout --
	[
	    {
	        "Id": "656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057",
	        "Created": "2024-09-19T18:40:21.693648884Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294019,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T18:40:21.83370316Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/hostname",
	        "HostsPath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/hosts",
	        "LogPath": "/var/lib/docker/containers/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057/656ffd17b558f4d4a2b9c0de0ee5ab8e55fb64f478d3837e93c1d9738183d057-json.log",
	        "Name": "/addons-971880",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-971880:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-971880",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997-init/diff:/var/lib/docker/overlay2/01d9e9e08c815432b8994f686c30467e8ad0d2e87cf6790233377a53c691e8f4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a5a48ba09a5069601d04fdb6696ee56985bb23840413df00b3b65bd12d552997/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-971880",
	                "Source": "/var/lib/docker/volumes/addons-971880/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-971880",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-971880",
	                "name.minikube.sigs.k8s.io": "addons-971880",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8401fb271cde0fae79ea1c883e095a5f34d887cc56bfc81485e9925601a92a9a",
	            "SandboxKey": "/var/run/docker/netns/8401fb271cde",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-971880": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d62f700a78daed261ed14f4bb32a66890d0b280b5d5a72af727d194426d28141",
	                    "EndpointID": "e792600fa39aac0b873f2e9aacc195668339c4f184c5b304571be40ad512fdb9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-971880",
	                        "656ffd17b558"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
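The post-mortem captures the full docker inspect dump above, but single fields can be pulled the same way minikube's cli_runner does later in this log: pass a Go template via -f/--format instead of parsing the JSON. A small sketch of that approach; inspectField is an illustrative helper name, not part of minikube:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectField runs `docker container inspect -f <tmpl> <name>` and
// returns the trimmed template output.
func inspectField(name, tmpl string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// The same template the log uses to resolve the host port that
	// Docker assigned to the container's 22/tcp (33133 above).
	port, err := inspectField("addons-971880",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", port)
}

Note how the PortBindings in the dump request 127.0.0.1 with an empty HostPort, so Docker picks free ephemeral ports; the assigned values only appear afterwards under NetworkSettings.Ports, which is why the template indexes that map.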
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-971880 -n addons-971880
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-971880 logs -n 25: (1.773361555s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-217912                                                                     | download-only-217912   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| start   | --download-only -p                                                                          | download-docker-592744 | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | download-docker-592744                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-592744                                                                   | download-docker-592744 | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-388144   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | binary-mirror-388144                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33855                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-388144                                                                     | binary-mirror-388144   | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:40 UTC |
	| addons  | enable dashboard -p                                                                         | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | addons-971880                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC |                     |
	|         | addons-971880                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-971880 --wait=true                                                                | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:40 UTC | 19 Sep 24 18:43 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-971880 addons                                                                        | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-971880 addons                                                                        | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-971880 ip                                                                            | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	| addons  | addons-971880 addons disable                                                                | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC | 19 Sep 24 18:52 UTC |
	|         | addons-971880                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-971880 ssh curl -s                                                                   | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:52 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-971880 ip                                                                            | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	| addons  | addons-971880 addons disable                                                                | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-971880 addons disable                                                                | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ssh     | addons-971880 ssh cat                                                                       | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | /opt/local-path-provisioner/pvc-3d62cb7c-5cd6-47f0-b923-9d3114eaf026_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-971880 addons disable                                                                | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-971880 addons disable                                                                | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | -p addons-971880                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | addons-971880                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | -p addons-971880                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-971880 addons disable                                                                | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:56 UTC | 19 Sep 24 18:56 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-971880 addons                                                                        | addons-971880          | jenkins | v1.34.0 | 19 Sep 24 18:57 UTC | 19 Sep 24 18:57 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:40:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:40:14.795022  293537 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:40:14.795209  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:40:14.795239  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:40:14.795263  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:40:14.795520  293537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 18:40:14.796051  293537 out.go:352] Setting JSON to false
	I0919 18:40:14.796950  293537 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8547,"bootTime":1726762668,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 18:40:14.797050  293537 start.go:139] virtualization:  
	I0919 18:40:14.799511  293537 out.go:177] * [addons-971880] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0919 18:40:14.802404  293537 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:40:14.802594  293537 notify.go:220] Checking for updates...
	I0919 18:40:14.806697  293537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:40:14.809013  293537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 18:40:14.810889  293537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	I0919 18:40:14.813452  293537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 18:40:14.815382  293537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:40:14.817599  293537 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:40:14.840916  293537 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:40:14.841034  293537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:40:14.895857  293537 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-19 18:40:14.88564199 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:40:14.895981  293537 docker.go:318] overlay module found
	I0919 18:40:14.898681  293537 out.go:177] * Using the docker driver based on user configuration
	I0919 18:40:14.900591  293537 start.go:297] selected driver: docker
	I0919 18:40:14.900609  293537 start.go:901] validating driver "docker" against <nil>
	I0919 18:40:14.900622  293537 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:40:14.901261  293537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:40:14.949650  293537 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-19 18:40:14.940202371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:40:14.949868  293537 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:40:14.950096  293537 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:40:14.952238  293537 out.go:177] * Using Docker driver with root privileges
	I0919 18:40:14.954169  293537 cni.go:84] Creating CNI manager for ""
	I0919 18:40:14.954244  293537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:40:14.954258  293537 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 18:40:14.954352  293537 start.go:340] cluster config:
	{Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:40:14.957664  293537 out.go:177] * Starting "addons-971880" primary control-plane node in "addons-971880" cluster
	I0919 18:40:14.959288  293537 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 18:40:14.961126  293537 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:40:14.962695  293537 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:14.962751  293537 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0919 18:40:14.962778  293537 cache.go:56] Caching tarball of preloaded images
	I0919 18:40:14.962775  293537 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:40:14.962860  293537 preload.go:172] Found /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0919 18:40:14.962870  293537 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 18:40:14.963218  293537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/config.json ...
	I0919 18:40:14.963237  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/config.json: {Name:mkdcb27e8211740d95283674cbbbe61d3cf7cd3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:14.982197  293537 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon, skipping pull
	I0919 18:40:14.982222  293537 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in daemon, skipping load
	I0919 18:40:14.982238  293537 cache.go:194] Successfully downloaded all kic artifacts
	I0919 18:40:14.982271  293537 start.go:360] acquireMachinesLock for addons-971880: {Name:mk9a87d1a88ed96332d84a90b344d67278fbcfbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:40:14.982383  293537 start.go:364] duration metric: took 90.97µs to acquireMachinesLock for "addons-971880"
	I0919 18:40:14.982415  293537 start.go:93] Provisioning new machine with config: &{Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:40:14.982485  293537 start.go:125] createHost starting for "" (driver="docker")
	I0919 18:40:14.985182  293537 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0919 18:40:14.985446  293537 start.go:159] libmachine.API.Create for "addons-971880" (driver="docker")
	I0919 18:40:14.985494  293537 client.go:168] LocalClient.Create starting
	I0919 18:40:14.985608  293537 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem
	I0919 18:40:15.651179  293537 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem
	I0919 18:40:16.244767  293537 cli_runner.go:164] Run: docker network inspect addons-971880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 18:40:16.259573  293537 cli_runner.go:211] docker network inspect addons-971880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 18:40:16.259663  293537 network_create.go:284] running [docker network inspect addons-971880] to gather additional debugging logs...
	I0919 18:40:16.259686  293537 cli_runner.go:164] Run: docker network inspect addons-971880
	W0919 18:40:16.278892  293537 cli_runner.go:211] docker network inspect addons-971880 returned with exit code 1
	I0919 18:40:16.278928  293537 network_create.go:287] error running [docker network inspect addons-971880]: docker network inspect addons-971880: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-971880 not found
	I0919 18:40:16.278941  293537 network_create.go:289] output of [docker network inspect addons-971880]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-971880 not found
	
	** /stderr **
	I0919 18:40:16.279047  293537 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:40:16.293226  293537 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001753420}
	I0919 18:40:16.293268  293537 network_create.go:124] attempt to create docker network addons-971880 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 18:40:16.293334  293537 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-971880 addons-971880
	I0919 18:40:16.363897  293537 network_create.go:108] docker network addons-971880 192.168.49.0/24 created
	I0919 18:40:16.363930  293537 kic.go:121] calculated static IP "192.168.49.2" for the "addons-971880" container
	I0919 18:40:16.364004  293537 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 18:40:16.380178  293537 cli_runner.go:164] Run: docker volume create addons-971880 --label name.minikube.sigs.k8s.io=addons-971880 --label created_by.minikube.sigs.k8s.io=true
	I0919 18:40:16.395244  293537 oci.go:103] Successfully created a docker volume addons-971880
	I0919 18:40:16.395327  293537 cli_runner.go:164] Run: docker run --rm --name addons-971880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-971880 --entrypoint /usr/bin/test -v addons-971880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0919 18:40:17.535557  293537 cli_runner.go:217] Completed: docker run --rm --name addons-971880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-971880 --entrypoint /usr/bin/test -v addons-971880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (1.140188795s)
	I0919 18:40:17.535586  293537 oci.go:107] Successfully prepared a docker volume addons-971880
	I0919 18:40:17.535611  293537 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:17.535632  293537 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 18:40:17.535690  293537 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-971880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 18:40:21.621921  293537 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-971880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.086185357s)
	I0919 18:40:21.621955  293537 kic.go:203] duration metric: took 4.086318543s to extract preloaded images to volume ...
	W0919 18:40:21.622102  293537 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0919 18:40:21.622210  293537 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 18:40:21.679227  293537 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-971880 --name addons-971880 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-971880 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-971880 --network addons-971880 --ip 192.168.49.2 --volume addons-971880:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0919 18:40:22.007220  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Running}}
	I0919 18:40:22.032291  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:22.055098  293537 cli_runner.go:164] Run: docker exec addons-971880 stat /var/lib/dpkg/alternatives/iptables
	I0919 18:40:22.125415  293537 oci.go:144] the created container "addons-971880" has a running status.
	I0919 18:40:22.125445  293537 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa...
	I0919 18:40:22.576988  293537 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 18:40:22.615973  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:22.638224  293537 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 18:40:22.638243  293537 kic_runner.go:114] Args: [docker exec --privileged addons-971880 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 18:40:22.722473  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:22.742554  293537 machine.go:93] provisionDockerMachine start ...
	I0919 18:40:22.743352  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:22.774687  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:22.774949  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:22.774959  293537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 18:40:22.948505  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-971880
	
	I0919 18:40:22.948580  293537 ubuntu.go:169] provisioning hostname "addons-971880"
	I0919 18:40:22.948677  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:22.969896  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:22.970140  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:22.970160  293537 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-971880 && echo "addons-971880" | sudo tee /etc/hostname
	I0919 18:40:23.142085  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-971880
	
	I0919 18:40:23.142233  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:23.173045  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:23.173282  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:23.173299  293537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-971880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-971880/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-971880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:40:23.320150  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:40:23.320184  293537 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-287261/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-287261/.minikube}
	I0919 18:40:23.320208  293537 ubuntu.go:177] setting up certificates
	I0919 18:40:23.320217  293537 provision.go:84] configureAuth start
	I0919 18:40:23.320288  293537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-971880
	I0919 18:40:23.336724  293537 provision.go:143] copyHostCerts
	I0919 18:40:23.336810  293537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem (1082 bytes)
	I0919 18:40:23.336932  293537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem (1123 bytes)
	I0919 18:40:23.337048  293537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem (1675 bytes)
	I0919 18:40:23.337107  293537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem org=jenkins.addons-971880 san=[127.0.0.1 192.168.49.2 addons-971880 localhost minikube]
	I0919 18:40:23.784639  293537 provision.go:177] copyRemoteCerts
	I0919 18:40:23.784720  293537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:40:23.784763  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:23.802489  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:23.909246  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 18:40:23.934171  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 18:40:23.958543  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 18:40:23.982904  293537 provision.go:87] duration metric: took 662.664687ms to configureAuth
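The server certificate generated during configureAuth is signed by the minikube CA and carries the SANs listed above (127.0.0.1, 192.168.49.2, addons-971880, localhost, minikube), which is why the API endpoint is reachable over both the published localhost port and the container IP. One way to confirm the SANs on the copied cert from inside the node:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'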
	I0919 18:40:23.982931  293537 ubuntu.go:193] setting minikube options for container-runtime
	I0919 18:40:23.983122  293537 config.go:182] Loaded profile config "addons-971880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:23.983236  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.012307  293537 main.go:141] libmachine: Using SSH client type: native
	I0919 18:40:24.012571  293537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0919 18:40:24.012592  293537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 18:40:24.296885  293537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 18:40:24.296910  293537 machine.go:96] duration metric: took 1.554333983s to provisionDockerMachine
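The CRIO_MINIKUBE_OPTIONS drop-in written just above passes --insecure-registry 10.96.0.0/12 to CRI-O, allowing image pulls from services in the cluster's service CIDR without TLS. Using the forwarded SSH port from this run, it can be inspected with something like:

	ssh -i /home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa -p 33133 docker@127.0.0.1 cat /etc/sysconfig/crio.minikube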
	I0919 18:40:24.296921  293537 client.go:171] duration metric: took 9.31141665s to LocalClient.Create
	I0919 18:40:24.296935  293537 start.go:167] duration metric: took 9.311489709s to libmachine.API.Create "addons-971880"
	I0919 18:40:24.296951  293537 start.go:293] postStartSetup for "addons-971880" (driver="docker")
	I0919 18:40:24.296965  293537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:40:24.297040  293537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:40:24.297084  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.314189  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.421363  293537 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:40:24.424465  293537 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:40:24.424502  293537 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:40:24.424514  293537 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:40:24.424521  293537 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 18:40:24.424532  293537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/addons for local assets ...
	I0919 18:40:24.424607  293537 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/files for local assets ...
	I0919 18:40:24.424637  293537 start.go:296] duration metric: took 127.676808ms for postStartSetup
	I0919 18:40:24.424947  293537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-971880
	I0919 18:40:24.441276  293537 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/config.json ...
	I0919 18:40:24.441573  293537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:40:24.441628  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.457539  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.557015  293537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 18:40:24.561316  293537 start.go:128] duration metric: took 9.578811258s to createHost
	I0919 18:40:24.561341  293537 start.go:83] releasing machines lock for "addons-971880", held for 9.578944592s
	I0919 18:40:24.561411  293537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-971880
	I0919 18:40:24.576931  293537 ssh_runner.go:195] Run: cat /version.json
	I0919 18:40:24.576990  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.576994  293537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:40:24.577069  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:24.594043  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.600367  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:24.825657  293537 ssh_runner.go:195] Run: systemctl --version
	I0919 18:40:24.829981  293537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 18:40:24.973384  293537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:40:24.977678  293537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:40:24.998966  293537 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 18:40:24.999140  293537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:40:25.045694  293537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
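Disabling the stock loopback and bridge/podman CNI configs is non-destructive: each matching file under /etc/cni/net.d is renamed with a .mk_disabled suffix so the kindnet config recommended below can take precedence. The set of disabled files can be listed afterwards with, for example:

	sudo ls /etc/cni/net.d/*.mk_disabled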
	I0919 18:40:25.045717  293537 start.go:495] detecting cgroup driver to use...
	I0919 18:40:25.045766  293537 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:40:25.045818  293537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 18:40:25.065419  293537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 18:40:25.077859  293537 docker.go:217] disabling cri-docker service (if available) ...
	I0919 18:40:25.077968  293537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 18:40:25.094706  293537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 18:40:25.112860  293537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 18:40:25.209683  293537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 18:40:25.302151  293537 docker.go:233] disabling docker service ...
	I0919 18:40:25.302273  293537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 18:40:25.323334  293537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 18:40:25.336378  293537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 18:40:25.429738  293537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 18:40:25.535609  293537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 18:40:25.547524  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:40:25.564274  293537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 18:40:25.564345  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.574971  293537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 18:40:25.575106  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.586035  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.596962  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.607358  293537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:40:25.617457  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.627519  293537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.643763  293537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:40:25.653582  293537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:40:25.662617  293537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:40:25.671391  293537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:25.758584  293537 ssh_runner.go:195] Run: sudo systemctl restart crio
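Reconstructed from the sed/grep commands above (the resulting file is not captured verbatim in this log), the relevant keys in /etc/crio/crio.conf.d/02-crio.conf should end up as:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]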
	I0919 18:40:25.881679  293537 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 18:40:25.881797  293537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 18:40:25.885692  293537 start.go:563] Will wait 60s for crictl version
	I0919 18:40:25.885756  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:40:25.889290  293537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:40:25.931872  293537 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 18:40:25.932001  293537 ssh_runner.go:195] Run: crio --version
	I0919 18:40:25.972764  293537 ssh_runner.go:195] Run: crio --version
	I0919 18:40:26.020911  293537 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0919 18:40:26.023368  293537 cli_runner.go:164] Run: docker network inspect addons-971880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:40:26.039908  293537 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 18:40:26.044177  293537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
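Note the update pattern above: the new hosts file is assembled in /tmp/h.$$ and then copied over /etc/hosts with sudo cp rather than moved, since /etc/hosts inside a Docker container is a bind mount that can be overwritten in place but not replaced by rename. The same grep -v / append / cp sequence is reused below for control-plane.minikube.internal:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts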
	I0919 18:40:26.057328  293537 kubeadm.go:883] updating cluster {Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:40:26.057469  293537 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:40:26.057534  293537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:40:26.133555  293537 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:40:26.133583  293537 crio.go:433] Images already preloaded, skipping extraction
	I0919 18:40:26.133643  293537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:40:26.173236  293537 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:40:26.173261  293537 cache_images.go:84] Images are preloaded, skipping loading
	I0919 18:40:26.173270  293537 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0919 18:40:26.173424  293537 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-971880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
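	This kubelet unit override is what later lands on the node as the 363-byte 10-kubeadm.conf drop-in; the empty ExecStart= line is the usual systemd idiom for clearing the base unit's ExecStart before redefining it. On the node, the merged unit can be reviewed with:

	systemctl cat kubelet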
	I0919 18:40:26.173545  293537 ssh_runner.go:195] Run: crio config
	I0919 18:40:26.220780  293537 cni.go:84] Creating CNI manager for ""
	I0919 18:40:26.220804  293537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:40:26.220815  293537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:40:26.220841  293537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-971880 NodeName:addons-971880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:40:26.220981  293537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-971880"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
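	The four YAML documents above (InitConfiguration and ClusterConfiguration at kubeadm.k8s.io/v1beta3, plus KubeletConfiguration and KubeProxyConfiguration) make up the 2151-byte kubeadm.yaml.new written below. With kubeadm v1.31 a file like this can be sanity-checked before init with, for example:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml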
	
	I0919 18:40:26.221063  293537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:40:26.230055  293537 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:40:26.230128  293537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:40:26.239075  293537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 18:40:26.257194  293537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:40:26.275405  293537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0919 18:40:26.294207  293537 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 18:40:26.297608  293537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:40:26.308590  293537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:26.398728  293537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:40:26.412875  293537 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880 for IP: 192.168.49.2
	I0919 18:40:26.412939  293537 certs.go:194] generating shared ca certs ...
	I0919 18:40:26.412971  293537 certs.go:226] acquiring lock for ca certs: {Name:mk523f1ff29ba1b125a662d8a16466e488af99fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:26.413155  293537 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key
	I0919 18:40:27.099466  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt ...
	I0919 18:40:27.099502  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt: {Name:mk72ad373d845c3dfe8b530e275b045be3f9ea44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.099743  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key ...
	I0919 18:40:27.099758  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key: {Name:mk6927d0aa607f1c3942a9244061e169aede669f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.099875  293537 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key
	I0919 18:40:27.690254  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt ...
	I0919 18:40:27.690284  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt: {Name:mka95663104efa43935e2407319e69b9f1a74e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.690470  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key ...
	I0919 18:40:27.690482  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key: {Name:mk6fc29661ffdcbf98927cc74a4761e2f385ba1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:27.690561  293537 certs.go:256] generating profile certs ...
	I0919 18:40:27.690623  293537 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.key
	I0919 18:40:27.690651  293537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt with IP's: []
	I0919 18:40:28.051916  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt ...
	I0919 18:40:28.051949  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: {Name:mke5e1b1ca475791e881a9b267a71ff7d5e349d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.052153  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.key ...
	I0919 18:40:28.052169  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.key: {Name:mk22f66e5d44e53266af14f016ae74fdede1016f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.052261  293537 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f
	I0919 18:40:28.052281  293537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 18:40:28.439619  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f ...
	I0919 18:40:28.439652  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f: {Name:mk5ef899798c2f7f8cf7a6ca8b6bd7730a17a415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.439841  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f ...
	I0919 18:40:28.439855  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f: {Name:mkeaf10cc0c4d5344f5ac3188436e53b1f1f489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.439951  293537 certs.go:381] copying /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt.5bcc1d6f -> /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt
	I0919 18:40:28.440041  293537 certs.go:385] copying /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key.5bcc1d6f -> /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key
	I0919 18:40:28.440125  293537 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key
	I0919 18:40:28.440146  293537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt with IP's: []
	I0919 18:40:28.762615  293537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt ...
	I0919 18:40:28.762647  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt: {Name:mkc47d434d3ac3df7a1893f6cdfe2041dc8c73e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.762858  293537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key ...
	I0919 18:40:28.762874  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key: {Name:mk13c604db6dc59e6437e08ad373c38c986c71d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:28.763079  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 18:40:28.763126  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem (1082 bytes)
	I0919 18:40:28.763158  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:40:28.763190  293537 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem (1675 bytes)
	I0919 18:40:28.763827  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:40:28.788710  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 18:40:28.813437  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:40:28.843050  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 18:40:28.867629  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:40:28.892447  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 18:40:28.919243  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:40:28.946630  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 18:40:28.971651  293537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:40:28.996622  293537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:40:29.016914  293537 ssh_runner.go:195] Run: openssl version
	I0919 18:40:29.022790  293537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:40:29.032837  293537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.036589  293537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.036657  293537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:40:29.043641  293537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
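The b5213941.0 symlink follows OpenSSL's hashed-directory convention: the name is the CA's subject hash (what the openssl x509 -hash call above prints) plus a .0 suffix, which lets anything trusting /etc/ssl/certs locate minikubeCA.pem by hash lookup:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941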
	I0919 18:40:29.053700  293537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:40:29.057830  293537 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:40:29.057902  293537 kubeadm.go:392] StartCluster: {Name:addons-971880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-971880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:40:29.058001  293537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 18:40:29.058061  293537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 18:40:29.100267  293537 cri.go:89] found id: ""
	I0919 18:40:29.100339  293537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:40:29.109720  293537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:40:29.118559  293537 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 18:40:29.118644  293537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:40:29.127755  293537 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:40:29.127779  293537 kubeadm.go:157] found existing configuration files:
	
	I0919 18:40:29.127861  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:40:29.136373  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:40:29.136470  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:40:29.145139  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:40:29.154300  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:40:29.154371  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:40:29.162969  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:40:29.172062  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:40:29.172201  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:40:29.180912  293537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:40:29.189802  293537 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:40:29.189895  293537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 18:40:29.198252  293537 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 18:40:29.242636  293537 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:40:29.242730  293537 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:40:29.263410  293537 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 18:40:29.263486  293537 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0919 18:40:29.263526  293537 kubeadm.go:310] OS: Linux
	I0919 18:40:29.263578  293537 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 18:40:29.263638  293537 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0919 18:40:29.263690  293537 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 18:40:29.263742  293537 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 18:40:29.263795  293537 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 18:40:29.263853  293537 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 18:40:29.263910  293537 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 18:40:29.263966  293537 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 18:40:29.264017  293537 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0919 18:40:29.324338  293537 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:40:29.324483  293537 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:40:29.324600  293537 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:40:29.332452  293537 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:40:29.337268  293537 out.go:235]   - Generating certificates and keys ...
	I0919 18:40:29.337370  293537 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:40:29.337440  293537 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:40:29.819408  293537 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:40:30.596636  293537 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:40:31.221718  293537 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:40:31.614141  293537 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:40:31.765095  293537 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:40:31.765651  293537 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-971880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:40:32.058450  293537 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:40:32.058584  293537 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-971880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:40:32.624269  293537 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:40:32.992299  293537 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:40:33.509180  293537 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:40:33.509495  293537 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:40:33.874069  293537 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:40:34.248453  293537 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:40:34.476867  293537 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:40:34.768121  293537 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:40:34.973586  293537 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:40:34.974364  293537 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:40:34.977489  293537 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:40:34.980287  293537 out.go:235]   - Booting up control plane ...
	I0919 18:40:34.980416  293537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:40:34.980503  293537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:40:34.981705  293537 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:40:34.992817  293537 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:40:35.003887  293537 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:40:35.004094  293537 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:40:35.102215  293537 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:40:35.102357  293537 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:40:37.103366  293537 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001367465s
	I0919 18:40:37.103468  293537 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:40:43.109466  293537 kubeadm.go:310] [api-check] The API server is healthy after 6.004105102s
	I0919 18:40:43.126717  293537 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:40:43.141419  293537 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:40:43.170749  293537 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:40:43.170964  293537 kubeadm.go:310] [mark-control-plane] Marking the node addons-971880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:40:43.182173  293537 kubeadm.go:310] [bootstrap-token] Using token: ebqgh7.vowgkmg5fzhkih57
	I0919 18:40:43.184491  293537 out.go:235]   - Configuring RBAC rules ...
	I0919 18:40:43.184636  293537 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:40:43.189100  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:40:43.198269  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:40:43.201802  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:40:43.205374  293537 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:40:43.209929  293537 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:40:43.515171  293537 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:40:43.950419  293537 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:40:44.514706  293537 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:40:44.516367  293537 kubeadm.go:310] 
	I0919 18:40:44.516445  293537 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:40:44.516459  293537 kubeadm.go:310] 
	I0919 18:40:44.516539  293537 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:40:44.516549  293537 kubeadm.go:310] 
	I0919 18:40:44.516575  293537 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:40:44.516640  293537 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:40:44.516698  293537 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:40:44.516707  293537 kubeadm.go:310] 
	I0919 18:40:44.516764  293537 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:40:44.516773  293537 kubeadm.go:310] 
	I0919 18:40:44.516823  293537 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:40:44.516832  293537 kubeadm.go:310] 
	I0919 18:40:44.516885  293537 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:40:44.516972  293537 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:40:44.517047  293537 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:40:44.517059  293537 kubeadm.go:310] 
	I0919 18:40:44.517143  293537 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:40:44.517237  293537 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:40:44.517248  293537 kubeadm.go:310] 
	I0919 18:40:44.517338  293537 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ebqgh7.vowgkmg5fzhkih57 \
	I0919 18:40:44.517446  293537 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e7e5d662c08ea043dbeea6d8ddc73c887c0affcdbd05da0c73a8636c5020b2b0 \
	I0919 18:40:44.517472  293537 kubeadm.go:310] 	--control-plane 
	I0919 18:40:44.517480  293537 kubeadm.go:310] 
	I0919 18:40:44.517565  293537 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:40:44.517574  293537 kubeadm.go:310] 
	I0919 18:40:44.517657  293537 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ebqgh7.vowgkmg5fzhkih57 \
	I0919 18:40:44.517766  293537 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e7e5d662c08ea043dbeea6d8ddc73c887c0affcdbd05da0c73a8636c5020b2b0 
	I0919 18:40:44.521437  293537 kubeadm.go:310] W0919 18:40:29.239267    1169 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:44.521742  293537 kubeadm.go:310] W0919 18:40:29.240213    1169 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:40:44.521961  293537 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0919 18:40:44.522073  293537 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
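The two v1beta3 deprecation warnings are expected with kubeadm v1.31, which prefers v1beta4; the fix the warning itself suggests is:

	kubeadm config migrate --old-config old.yaml --new-config new.yaml

The SystemVerification and Service-Kubelet warnings are likewise benign here: SystemVerification is explicitly skipped via --ignore-preflight-errors above, and minikube starts kubelet itself rather than enabling it through systemd.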
	I0919 18:40:44.522100  293537 cni.go:84] Creating CNI manager for ""
	I0919 18:40:44.522107  293537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:40:44.524557  293537 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 18:40:44.526468  293537 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 18:40:44.530881  293537 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0919 18:40:44.530902  293537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 18:40:44.551529  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 18:40:44.830646  293537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:40:44.830784  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:44.830899  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-971880 minikube.k8s.io/updated_at=2024_09_19T18_40_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-971880 minikube.k8s.io/primary=true
	I0919 18:40:44.846703  293537 ops.go:34] apiserver oom_adj: -16
	I0919 18:40:44.988855  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:45.489713  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:45.988930  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:46.489778  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:46.989447  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:47.489853  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:47.988905  293537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:40:48.092838  293537 kubeadm.go:1113] duration metric: took 3.262100386s to wait for elevateKubeSystemPrivileges
	I0919 18:40:48.092865  293537 kubeadm.go:394] duration metric: took 19.034985288s to StartCluster
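The elevateKubeSystemPrivileges step timed above creates the minikube-rbac clusterrolebinding (cluster-admin for the kube-system:default service account) and then polls "get sa default" until the cluster's default service account exists. Once the cluster is up, the binding can be verified with:

	kubectl --context addons-971880 get clusterrolebinding minikube-rbac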
	I0919 18:40:48.092882  293537 settings.go:142] acquiring lock: {Name:mkc6a05e17453fceabfc207d0b4cc62ec1022659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:48.093002  293537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 18:40:48.093407  293537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/kubeconfig: {Name:mkfb909fdfd15278a636c3045acef421204406b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:40:48.093611  293537 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:40:48.093742  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:40:48.093981  293537 config.go:182] Loaded profile config "addons-971880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:48.094022  293537 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0919 18:40:48.094098  293537 addons.go:69] Setting yakd=true in profile "addons-971880"
	I0919 18:40:48.094113  293537 addons.go:234] Setting addon yakd=true in "addons-971880"
	I0919 18:40:48.094135  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.094641  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.095216  293537 addons.go:69] Setting cloud-spanner=true in profile "addons-971880"
	I0919 18:40:48.095236  293537 addons.go:234] Setting addon cloud-spanner=true in "addons-971880"
	I0919 18:40:48.095263  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.095702  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.095942  293537 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-971880"
	I0919 18:40:48.095971  293537 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-971880"
	I0919 18:40:48.096001  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.096486  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.099515  293537 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-971880"
	I0919 18:40:48.099580  293537 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-971880"
	I0919 18:40:48.099611  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.100085  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.103641  293537 addons.go:69] Setting default-storageclass=true in profile "addons-971880"
	I0919 18:40:48.103682  293537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-971880"
	I0919 18:40:48.104031  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.104497  293537 addons.go:69] Setting registry=true in profile "addons-971880"
	I0919 18:40:48.104553  293537 addons.go:234] Setting addon registry=true in "addons-971880"
	I0919 18:40:48.104649  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.110424  293537 addons.go:69] Setting gcp-auth=true in profile "addons-971880"
	I0919 18:40:48.110516  293537 mustload.go:65] Loading cluster: addons-971880
	I0919 18:40:48.110775  293537 config.go:182] Loaded profile config "addons-971880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:40:48.111137  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.113963  293537 addons.go:69] Setting storage-provisioner=true in profile "addons-971880"
	I0919 18:40:48.114039  293537 addons.go:234] Setting addon storage-provisioner=true in "addons-971880"
	I0919 18:40:48.114115  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.114635  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.124272  293537 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-971880"
	I0919 18:40:48.124372  293537 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-971880"
	I0919 18:40:48.125252  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.126048  293537 addons.go:69] Setting ingress=true in profile "addons-971880"
	I0919 18:40:48.126119  293537 addons.go:234] Setting addon ingress=true in "addons-971880"
	I0919 18:40:48.128419  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.133638  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.139430  293537 addons.go:69] Setting ingress-dns=true in profile "addons-971880"
	I0919 18:40:48.139516  293537 addons.go:234] Setting addon ingress-dns=true in "addons-971880"
	I0919 18:40:48.139599  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.148412  293537 addons.go:69] Setting volcano=true in profile "addons-971880"
	I0919 18:40:48.148444  293537 addons.go:234] Setting addon volcano=true in "addons-971880"
	I0919 18:40:48.148485  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.148978  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.154658  293537 addons.go:69] Setting inspektor-gadget=true in profile "addons-971880"
	I0919 18:40:48.155015  293537 addons.go:234] Setting addon inspektor-gadget=true in "addons-971880"
	I0919 18:40:48.155265  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.160373  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.168211  293537 addons.go:69] Setting volumesnapshots=true in profile "addons-971880"
	I0919 18:40:48.168262  293537 addons.go:234] Setting addon volumesnapshots=true in "addons-971880"
	I0919 18:40:48.168315  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.168800  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.169585  293537 addons.go:69] Setting metrics-server=true in profile "addons-971880"
	I0919 18:40:48.169648  293537 addons.go:234] Setting addon metrics-server=true in "addons-971880"
	I0919 18:40:48.169697  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.170227  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.188308  293537 out.go:177] * Verifying Kubernetes components...
	I0919 18:40:48.192536  293537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:40:48.193478  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.200152  293537 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:40:48.203808  293537 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:48.203875  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:40:48.203989  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.225477  293537 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:40:48.227964  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:40:48.228044  293537 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:40:48.228159  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.233391  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.247643  293537 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:40:48.247770  293537 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:40:48.249933  293537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:48.249954  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:40:48.250022  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.250280  293537 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:48.250292  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:40:48.250331  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.268018  293537 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-971880"
	I0919 18:40:48.268064  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.268669  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.301994  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.321185  293537 addons.go:234] Setting addon default-storageclass=true in "addons-971880"
	I0919 18:40:48.321281  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:48.321773  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:48.335436  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:48.370533  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:40:48.379795  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:48.386450  293537 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:40:48.386524  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:40:48.386624  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	W0919 18:40:48.424168  293537 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
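
The 'volcano' warning above is expected on this job: the volcano addon declares no support for the crio runtime, so minikube skips it and keeps enabling the remaining addons. The surviving addon set can be listed afterwards (a sketch using the standard minikube subcommand; the profile name is taken from the log):

	minikube -p addons-971880 addons list
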
	I0919 18:40:48.424644  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:40:48.425429  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:40:48.436157  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.442440  293537 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:40:48.443423  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.445300  293537 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:40:48.445323  293537 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:40:48.445395  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.456187  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:40:48.457831  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:40:48.460477  293537 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:40:48.460613  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.461087  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:40:48.468240  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.470505  293537 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:40:48.470671  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:40:48.470691  293537 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:40:48.470755  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.471236  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:40:48.471287  293537 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:40:48.471381  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.476564  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:40:48.478508  293537 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 18:40:48.478550  293537 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:40:48.485990  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:40:48.486221  293537 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:40:48.486238  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:40:48.486306  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.492362  293537 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:40:48.492383  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:40:48.492450  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.507563  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:40:48.510093  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:40:48.520263  293537 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:40:48.522642  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:40:48.522662  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:40:48.522728  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.523732  293537 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:40:48.528259  293537 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:40:48.537887  293537 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:48.537911  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:40:48.537972  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.563951  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.620227  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.621676  293537 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:48.621692  293537 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:40:48.621753  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:48.656254  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.660010  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.662112  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.680282  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.691828  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.701064  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.730089  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:48.847079  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:40:48.847155  293537 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:40:48.899693  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:40:48.952737  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:40:48.983419  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:40:49.059022  293537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:40:49.066672  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:40:49.071175  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:40:49.071244  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:40:49.089048  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:40:49.089124  293537 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:40:49.105564  293537 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:40:49.105644  293537 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:40:49.141153  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:40:49.153728  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:40:49.171677  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:40:49.171749  293537 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:40:49.196160  293537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:40:49.196258  293537 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:40:49.201404  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:40:49.300209  293537 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:40:49.300237  293537 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:40:49.307634  293537 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:49.307707  293537 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:40:49.314732  293537 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:40:49.314805  293537 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:40:49.316388  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:40:49.316451  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:40:49.322479  293537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:40:49.322560  293537 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:40:49.324957  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:40:49.325025  293537 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:40:49.443569  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:40:49.465909  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:40:49.465986  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:40:49.486828  293537 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:40:49.486903  293537 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:40:49.490513  293537 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:40:49.490583  293537 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:40:49.497348  293537 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:49.497417  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:40:49.499708  293537 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:49.499771  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:40:49.604687  293537 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:40:49.604762  293537 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:40:49.622808  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:40:49.622885  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:40:49.638544  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:40:49.638621  293537 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:40:49.675019  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:40:49.677046  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:40:49.714011  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:40:49.714092  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:40:49.716817  293537 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:40:49.716895  293537 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:40:49.762646  293537 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:49.762723  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:40:49.803578  293537 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:40:49.803657  293537 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:40:49.810913  293537 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:40:49.810986  293537 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:40:49.866155  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:49.879116  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:40:49.879177  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:40:49.879534  293537 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:49.879572  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:40:49.968313  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:40:49.968393  293537 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:40:49.996879  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:40:50.013103  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:40:50.013191  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:40:50.050463  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:40:50.050536  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:40:50.104006  293537 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:50.104091  293537 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:40:50.207761  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:40:52.084903  293537 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.659437294s)
	I0919 18:40:52.084932  293537 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
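
The 3.66s replace that just completed rewrites the coredns ConfigMap in flight: it splices a hosts stanza into the Corefile so pods resolve host.minikube.internal to the host-side gateway 192.168.49.1, and inserts a log directive to enable query logging. The injected stanza can be read back out of the ConfigMap (a sketch; the jsonpath key assumes the Corefile lives under .data.Corefile, which is where standard kube-system CoreDNS deployments keep it):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected to contain the injected block:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }
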
	I0919 18:40:52.804936  293537 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-971880" context rescaled to 1 replicas
	I0919 18:40:52.874938  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.975162388s)
	I0919 18:40:53.461651  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.508878588s)
	I0919 18:40:53.462093  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.478595684s)
	I0919 18:40:53.462151  293537 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.403060239s)
	I0919 18:40:53.463230  293537 node_ready.go:35] waiting up to 6m0s for node "addons-971880" to be "Ready" ...
	I0919 18:40:53.463954  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.39718287s)
	I0919 18:40:53.468432  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.327207142s)
	W0919 18:40:53.594176  293537 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
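
The 'storage-provisioner-rancher' warning above is a plain optimistic-concurrency conflict: the callback that marks the local-path StorageClass as default raced another writer on the same object, so the apiserver rejected the stale update. The operation is retryable by hand; a minimal sketch (the class name local-path comes from the error text, and storageclass.kubernetes.io/is-default-class is the standard Kubernetes default-class annotation):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
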
	I0919 18:40:54.547718  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.393904123s)
	I0919 18:40:54.547758  293537 addons.go:475] Verifying addon ingress=true in "addons-971880"
	I0919 18:40:54.548061  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.346577228s)
	I0919 18:40:54.548169  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.104526551s)
	I0919 18:40:54.548182  293537 addons.go:475] Verifying addon metrics-server=true in "addons-971880"
	I0919 18:40:54.548246  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.873146657s)
	I0919 18:40:54.548327  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.871207532s)
	I0919 18:40:54.548340  293537 addons.go:475] Verifying addon registry=true in "addons-971880"
	I0919 18:40:54.548534  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.682309401s)
	W0919 18:40:54.548991  293537 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:40:54.549023  293537 retry.go:31] will retry after 205.142793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
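
Both failures above are the classic CRD/CR ordering problem: the VolumeSnapshotClass object is submitted in the same kubectl apply batch as the CRDs that define its kind, and the client's RESTMapper has not discovered the new API yet, hence "no matches for kind ... ensure CRDs are installed first". The retry below re-runs the same apply with --force once the CRDs from the first pass exist. An equivalent two-pass sketch (file paths taken from the log; waiting on the Established condition is the standard way to block until a CRD is served):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
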
	I0919 18:40:54.548602  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.5516414s)
	I0919 18:40:54.551437  293537 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-971880 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:40:54.551461  293537 out.go:177] * Verifying ingress addon...
	I0919 18:40:54.551445  293537 out.go:177] * Verifying registry addon...
	I0919 18:40:54.557513  293537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 18:40:54.557513  293537 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:40:54.596066  293537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:54.596198  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.601815  293537 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:40:54.601882  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
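
Each kapi.go waiter above polls the labeled pods until they leave Pending; the same check can be reproduced from a shell with kubectl wait (a sketch; the label selectors and namespaces are the ones shown in the log lines above):

	kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m
	kubectl -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m
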
	I0919 18:40:54.754480  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:40:54.864433  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.656576474s)
	I0919 18:40:54.864471  293537 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-971880"
	I0919 18:40:54.868277  293537 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:40:54.871943  293537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:40:54.892561  293537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:54.892589  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.065376  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.066290  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.378204  293537 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:55.378236  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.467473  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:40:55.562562  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.564541  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.878298  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.069085  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.070574  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.378665  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.563025  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.564886  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.877667  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.063752  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.064417  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.376416  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.467718  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:40:57.564440  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.564945  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.876755  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.064027  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.065697  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.092344  293537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.337768039s)
	I0919 18:40:58.378257  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.403319  293537 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:40:58.403425  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:58.443191  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:58.567155  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.567685  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.572168  293537 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:40:58.592692  293537 addons.go:234] Setting addon gcp-auth=true in "addons-971880"
	I0919 18:40:58.592803  293537 host.go:66] Checking if "addons-971880" exists ...
	I0919 18:40:58.593324  293537 cli_runner.go:164] Run: docker container inspect addons-971880 --format={{.State.Status}}
	I0919 18:40:58.612730  293537 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:40:58.612786  293537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-971880
	I0919 18:40:58.630178  293537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/addons-971880/id_rsa Username:docker}
	I0919 18:40:58.730014  293537 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:40:58.732139  293537 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:40:58.734124  293537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:40:58.734146  293537 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:40:58.768295  293537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:40:58.768320  293537 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:40:58.797975  293537 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:58.797994  293537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:40:58.817821  293537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:40:58.876322  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.072911  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.074414  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.382599  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.438129  293537 addons.go:475] Verifying addon gcp-auth=true in "addons-971880"
	I0919 18:40:59.440631  293537 out.go:177] * Verifying gcp-auth addon...
	I0919 18:40:59.442860  293537 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:40:59.464789  293537 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:40:59.464814  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.480890  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:40:59.561671  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.562964  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.875651  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.946736  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.070465  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.077137  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.384708  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.448341  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.582774  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.583004  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.877719  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.948225  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.065344  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.067794  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.375354  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.448227  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.561780  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.562960  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.875771  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.945881  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.966831  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:02.062563  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.062799  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.376547  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.447352  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.561779  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.562611  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.875580  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.946256  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.061831  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.062387  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.375870  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.446437  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.560976  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.561891  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.875202  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.947962  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.967037  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:04.061480  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.062379  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.375941  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.446238  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.562468  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.562877  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.875285  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.946660  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.062465  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.062886  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.376001  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.446912  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.561838  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.562615  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.875421  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.946657  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.061543  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.062589  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.375196  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.446960  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.466347  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:06.562063  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.562764  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.875596  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.946898  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.061921  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.063181  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.375810  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.446026  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.561428  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.562546  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.875172  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.946505  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.061499  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:08.062816  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.376019  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.446272  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.467168  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:08.562612  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.562946  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:08.876335  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.946735  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.061910  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:09.062619  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.375133  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.447206  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.561421  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:09.562389  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.876131  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.946813  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.062507  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:10.064607  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.375353  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.446904  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.562895  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.563990  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:10.875793  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.946554  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.967028  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:11.061932  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:11.063150  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.375830  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.446348  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.561653  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:11.563206  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.875920  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.946654  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.061917  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:12.062460  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.375786  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.446012  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.562540  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.562908  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:12.877867  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.946299  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.060991  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:13.062119  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.375793  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.445883  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.466546  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:13.561960  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:13.562612  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.875666  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.947427  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.061694  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:14.062464  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.376511  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.446294  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.562791  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:14.563547  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.875418  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.946605  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.062005  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:15.063964  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:15.377135  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.446681  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.562379  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:15.562700  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:15.877033  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.946231  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.966501  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:16.062211  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:16.063155  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:16.376819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.446617  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.563906  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:16.565097  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:16.876051  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.946253  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.066866  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:17.067521  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:17.376509  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.446316  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.561816  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:17.562038  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:17.875656  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.946877  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:17.966970  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:18.061223  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:18.062175  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:18.376287  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.446551  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:18.561931  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:18.562843  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:18.875329  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.947103  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.061452  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:19.062306  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:19.375857  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.446215  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.561838  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:19.562763  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:19.875704  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.946970  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:19.967322  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:20.061991  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:20.063004  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:20.375819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.446018  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:20.561673  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:20.562416  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:20.875770  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.946314  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:21.061298  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:21.062118  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:21.376133  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.446523  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:21.561547  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:21.562572  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:21.875087  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.946665  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:22.061448  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:22.062440  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:22.375879  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.446019  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:22.467332  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:22.561928  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:22.562802  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:22.876174  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.947177  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:23.061819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:23.062482  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:23.375560  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:23.446172  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:23.560905  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:23.562464  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:23.875348  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:23.947319  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.060883  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:24.062369  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:24.376201  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:24.446773  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.562406  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:24.563400  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:24.876177  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:24.947005  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:24.969870  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:25.060859  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:25.061661  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:25.375277  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:25.447052  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:25.561034  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:25.562067  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:25.875854  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:25.946102  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:26.061201  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:26.062358  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:26.376390  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:26.446809  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:26.561912  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:26.562731  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:26.875687  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:26.946990  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:27.060915  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:27.061831  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:27.375963  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:27.446755  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:27.466656  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:27.561455  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:27.562776  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:27.875854  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:27.946628  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:28.061527  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:28.062880  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:28.376492  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:28.446888  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:28.561880  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:28.563128  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:28.876058  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:28.947461  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.061576  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:29.062985  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:29.375644  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:29.446816  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.561107  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:29.561942  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:29.875796  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:29.945754  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:29.966769  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:30.062213  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:30.062963  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:30.376763  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:30.446860  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:30.561690  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:30.562513  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:30.875970  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:30.945998  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:31.062104  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:31.063092  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:31.375520  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:31.446646  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:31.561126  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:31.562097  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:31.875721  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:31.946673  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.061200  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:32.061808  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:32.376029  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:32.446717  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.466655  293537 node_ready.go:53] node "addons-971880" has status "Ready":"False"
	I0919 18:41:32.561644  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:32.562929  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:32.875835  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:32.966393  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:32.984056  293537 node_ready.go:49] node "addons-971880" has status "Ready":"True"
	I0919 18:41:32.984085  293537 node_ready.go:38] duration metric: took 39.520822677s for node "addons-971880" to be "Ready" ...
	I0919 18:41:32.984096  293537 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:41:33.035725  293537 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lzshk" in "kube-system" namespace to be "Ready" ...
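The run above interleaves two wait loops: kapi.go polls each addon's label selector (all still Pending), while node_ready.go re-checks the node's Ready condition roughly every 2.5s until it flips to "Ready":"True" at 18:41:32, 39.5s in. As a rough illustration of that node poll, here is a minimal client-go sketch; the kubeconfig path is a placeholder and the polling cadence is inferred from the timestamps above, so treat it as a sketch of the pattern, not minikube's actual node_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; minikube resolves its own config.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	start := time.Now()
    	// ~2.5s cadence inferred from the node_ready.go timestamps above.
    	err = wait.PollImmediate(2500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-971880", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // transient API errors: keep polling
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				// Mirrors the log: "Ready":"False" until the kubelet reports True.
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("node \"addons-971880\" Ready after %s\n", time.Since(start))
    }

Returning (false, nil) on a transient Get error keeps the poll alive instead of aborting the whole wait, which matches how the log keeps ticking without interleaved error lines.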
	I0919 18:41:33.085442  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:33.086727  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:33.401079  293537 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:41:33.401109  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:33.449540  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:33.562204  293537 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:41:33.562272  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:33.562938  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
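The two "Found N Pods for label selector" lines mark the point where the kapi.go waiter first sees pods matching kubernetes.io/minikube-addons=csi-hostpath-driver and =registry; from then on the wait is over concrete pods rather than an empty selector. A hedged sketch of that step, assuming the same client-go clientset as in the previous sketch (an illustrative helper, not minikube's kapi.go):

    package waiters

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // allPodsReady lists pods matching selector in ns and reports whether
    // every matched pod has its PodReady condition set to True.
    func allPodsReady(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
    	if len(pods.Items) == 0 {
    		return false, nil // selector matches nothing yet: keep waiting
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    				break
    			}
    		}
    		if !ready {
    			return false, nil // at least one pod still Pending, as in the log
    		}
    	}
    	return true, nil
    }

A selector that matches zero pods stays "waiting" rather than succeeding vacuously, which is why the log shows Pending for many seconds before any pods exist.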
	I0919 18:41:33.879142  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:33.979993  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.083770  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:34.085152  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:34.397059  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:34.486406  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.549855  293537 pod_ready.go:93] pod "coredns-7c65d6cfc9-lzshk" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.549882  293537 pod_ready.go:82] duration metric: took 1.514119286s for pod "coredns-7c65d6cfc9-lzshk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.549904  293537 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.557730  293537 pod_ready.go:93] pod "etcd-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.557755  293537 pod_ready.go:82] duration metric: took 7.843669ms for pod "etcd-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.558059  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.564913  293537 pod_ready.go:93] pod "kube-apiserver-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.564937  293537 pod_ready.go:82] duration metric: took 6.858144ms for pod "kube-apiserver-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.564948  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.570244  293537 pod_ready.go:93] pod "kube-controller-manager-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.570267  293537 pod_ready.go:82] duration metric: took 5.311429ms for pod "kube-controller-manager-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.570281  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pf8wk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.587641  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:34.589693  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:34.607709  293537 pod_ready.go:93] pod "kube-proxy-pf8wk" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.607736  293537 pod_ready.go:82] duration metric: took 37.446262ms for pod "kube-proxy-pf8wk" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.607748  293537 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.883869  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:34.946929  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:34.968272  293537 pod_ready.go:93] pod "kube-scheduler-addons-971880" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:34.968298  293537 pod_ready.go:82] duration metric: took 360.543214ms for pod "kube-scheduler-addons-971880" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:34.968310  293537 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace to be "Ready" ...
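From here the log settles into the pod_ready.go pattern: a named pod ("metrics-server-84c5f94fbc-jrbzm" in kube-system) is re-checked every couple of seconds for up to 6m0s, and a duration metric is printed once it reports Ready, as happened above for coredns (1.51s) through kube-scheduler (360ms). A minimal sketch of that bounded wait, assuming the same helper package as the previous sketch (illustrative, not minikube's pod_ready.go):

    package waiters

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls one named pod's Ready condition until it is True or
    // the timeout elapses, then prints a duration metric like the log above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	start := time.Now()
    	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // pod not created yet or transient error: keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    	if err != nil {
    		return fmt.Errorf("pod %q in %q namespace: %w", name, ns, err)
    	}
    	fmt.Printf("duration metric: took %s for pod %q to be \"Ready\"\n", time.Since(start), name)
    	return nil
    }

With the names from the log, the wait that begins here would be roughly waitPodReady(cs, "kube-system", "metrics-server-84c5f94fbc-jrbzm", 6*time.Minute), which keeps observing "Ready":"False" until the pod's Ready condition turns True or the 6m budget runs out.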
	I0919 18:41:35.066332  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:35.067047  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:35.377071  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:35.446694  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:35.563116  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:35.564514  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:35.878270  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:35.947116  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.062668  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:36.064093  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:36.378169  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:36.446076  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.562355  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:36.563611  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:36.877416  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:36.946831  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:36.976090  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:37.070127  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:37.070708  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:37.378036  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:37.449197  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:37.572857  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:37.574137  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:37.878958  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:37.947687  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:38.066635  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:38.068196  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:38.379960  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:38.447036  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:38.566180  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:38.567164  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:38.878059  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:38.948415  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:39.065678  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:39.068345  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:39.382645  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:39.446403  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:39.474388  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:39.561643  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:39.562705  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:39.876574  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:39.948622  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:40.064799  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:40.070969  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:40.378517  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:40.447109  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:40.563248  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:40.564262  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:40.878488  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:40.947935  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:41.066945  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:41.068000  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:41.377261  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:41.447055  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:41.475670  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:41.564547  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:41.565894  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:41.877348  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:41.946812  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:42.063870  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:42.065853  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:42.378089  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:42.447279  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:42.562819  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:42.564637  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:42.877041  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:42.947027  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:43.063723  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:43.066318  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:43.378706  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:43.447494  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:43.475801  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:43.562902  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:43.565584  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:43.878005  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:43.958649  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:44.063440  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:44.064670  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:44.378042  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:44.446376  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:44.566188  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:44.567897  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:44.885274  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:44.948492  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:45.083599  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:45.091434  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:45.382104  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:45.479271  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:45.481202  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:45.565683  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:45.566749  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:45.877414  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:45.947825  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:46.065683  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:46.067689  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:46.377216  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:46.447142  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:46.564574  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:46.566078  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:46.879355  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:46.976594  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.062108  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:47.063081  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:47.377187  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:47.446380  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.563241  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:47.563925  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:47.877203  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:47.946716  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:47.974788  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:48.062852  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:48.063938  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:48.379041  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:48.446882  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:48.564580  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:48.567894  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:48.877326  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:48.947262  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.064573  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:49.065888  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:49.377713  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:49.446539  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.563561  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:49.564620  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:49.876718  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:49.946923  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:49.976834  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:50.062984  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:50.063275  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:50.378977  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:50.477858  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same four kapi.go:96 "waiting for pod ..., current state: Pending: [<nil>]" lines repeat roughly every 500ms from 18:41:50 through 18:42:10; interleaved pod_ready.go:103 checks report pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace still "Ready":"False" about every 2.5s ...]
	I0919 18:42:10.562053  293537 kapi.go:107] duration metric: took 1m16.004535153s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 18:42:10.562215  293537 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... polling continues every ~500ms for "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=csi-hostpath-driver", and "kubernetes.io/minikube-addons=gcp-auth" through 18:42:31, with pod_ready.go:103 still reporting "metrics-server-84c5f94fbc-jrbzm" as "Ready":"False" ...]
	I0919 18:42:32.062732  293537 kapi.go:107] duration metric: took 1m37.505219255s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 18:42:32.376733  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... polling continues for "kubernetes.io/minikube-addons=csi-hostpath-driver" and "kubernetes.io/minikube-addons=gcp-auth" through 18:42:35; a pod_ready.go:103 check at 18:42:33 still reports metrics-server not "Ready" ...]
	I0919 18:42:35.947223  293537 kapi.go:107] duration metric: took 1m36.504361675s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:42:35.949426  293537 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-971880 cluster.
	I0919 18:42:35.951872  293537 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:42:35.953970  293537 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
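The opt-out described in the message above is just a pod label. As a rough illustration only (not minikube's own code; the pod name, namespace, and image are placeholders), a minimal client-go sketch that creates a pod carrying the `gcp-auth-skip-secret` key so the webhook leaves it alone:

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig minikube writes for the cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // placeholder name
				Labels: map[string]string{
					// Presence of this key tells the gcp-auth webhook
					// not to mount credentials into the pod.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}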
	I0919 18:42:35.975189  293537 pod_ready.go:103] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"False"
	I0919 18:42:36.377260  293537 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... polling continues for "kubernetes.io/minikube-addons=csi-hostpath-driver" through 18:42:43; pod_ready.go:103 reports "metrics-server-84c5f94fbc-jrbzm" "Ready":"False" at 18:42:37, 18:42:40, and 18:42:42 ...]
	I0919 18:42:43.878894  293537 kapi.go:107] duration metric: took 1m49.006949754s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:42:43.881074  293537 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, default-storageclass, ingress-dns, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0919 18:42:43.884077  293537 addons.go:510] duration metric: took 1m55.790054032s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin default-storageclass ingress-dns metrics-server inspektor-gadget yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
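For context on the long runs of kapi.go:96 lines above: minikube polls the API server for pods matching each addon's label selector until they leave Pending, then records the duration metric. The following is a hypothetical client-go reconstruction of that loop, not the actual kapi.go implementation; the 500ms cadence is inferred from the timestamps and the function name is invented:

	package kapiwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls pods matching selector in ns until all are
	// Running or the timeout elapses, mirroring the repeated
	// "waiting for pod ..., current state: Pending" log lines above.
	func waitForPodsRunning(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}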
	I0919 18:42:43.983903  293537 pod_ready.go:93] pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace has status "Ready":"True"
	I0919 18:42:43.983991  293537 pod_ready.go:82] duration metric: took 1m9.015672466s for pod "metrics-server-84c5f94fbc-jrbzm" in "kube-system" namespace to be "Ready" ...
	I0919 18:42:43.984031  293537 pod_ready.go:39] duration metric: took 1m10.999895399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:42:43.984651  293537 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:42:43.984805  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:42:43.984924  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:42:44.038733  293537 cri.go:89] found id: "a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:44.038756  293537 cri.go:89] found id: ""
	I0919 18:42:44.038765  293537 logs.go:276] 1 containers: [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99]
	I0919 18:42:44.038822  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.043249  293537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:42:44.043334  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:42:44.088606  293537 cri.go:89] found id: "1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:44.088631  293537 cri.go:89] found id: ""
	I0919 18:42:44.088639  293537 logs.go:276] 1 containers: [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b]
	I0919 18:42:44.088700  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.092415  293537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:42:44.092495  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:42:44.135646  293537 cri.go:89] found id: "c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:44.135670  293537 cri.go:89] found id: ""
	I0919 18:42:44.135678  293537 logs.go:276] 1 containers: [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4]
	I0919 18:42:44.135735  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.139218  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:42:44.139291  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:42:44.179758  293537 cri.go:89] found id: "d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:44.179782  293537 cri.go:89] found id: ""
	I0919 18:42:44.179790  293537 logs.go:276] 1 containers: [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569]
	I0919 18:42:44.179856  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.184338  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:42:44.184432  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:42:44.223834  293537 cri.go:89] found id: "dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:44.223868  293537 cri.go:89] found id: ""
	I0919 18:42:44.223877  293537 logs.go:276] 1 containers: [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a]
	I0919 18:42:44.223947  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.227670  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:42:44.227745  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:42:44.264952  293537 cri.go:89] found id: "4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:44.264974  293537 cri.go:89] found id: ""
	I0919 18:42:44.264982  293537 logs.go:276] 1 containers: [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341]
	I0919 18:42:44.265042  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:44.268932  293537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:42:44.269034  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:42:44.307612  293537 cri.go:89] found id: "dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:44.307635  293537 cri.go:89] found id: ""
	I0919 18:42:44.307644  293537 logs.go:276] 1 containers: [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82]
	I0919 18:42:44.307706  293537 ssh_runner.go:195] Run: which crictl
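The block above resolves each control-plane component to a container ID by running `sudo crictl ps -a --quiet --name=<component>` over SSH. A self-contained Go sketch of the same discovery step, run locally rather than through minikube's ssh_runner (the package and function names are assumptions; the crictl flags are exactly the ones shown in the log):

	package crilogs

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers in any state whose name
	// matches name, using the same flags as the log: crictl ps -a --quiet --name=<name>.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}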
	I0919 18:42:44.311797  293537 logs.go:123] Gathering logs for container status ...
	I0919 18:42:44.311840  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:42:44.363577  293537 logs.go:123] Gathering logs for kubelet ...
	I0919 18:42:44.363608  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:42:44.393941  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: W0919 18:41:32.997203    1465 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.394218  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: E0919 18:41:32.997256    1465 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.394411  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016421    1465 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.394643  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016471    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.394822  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016528    1465 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.395044  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016541    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.395209  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016587    1465 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.395414  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016607    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.395601  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016654    1465 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.395828  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.396004  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.396232  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:44.396400  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:44.396607  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:42:44.454727  293537 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:42:44.454772  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:42:44.643066  293537 logs.go:123] Gathering logs for etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] ...
	I0919 18:42:44.643099  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:44.698468  293537 logs.go:123] Gathering logs for kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] ...
	I0919 18:42:44.698502  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:44.743288  293537 logs.go:123] Gathering logs for kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] ...
	I0919 18:42:44.743317  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:44.813056  293537 logs.go:123] Gathering logs for kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] ...
	I0919 18:42:44.813098  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:44.861228  293537 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:42:44.861256  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:42:44.957892  293537 logs.go:123] Gathering logs for dmesg ...
	I0919 18:42:44.957933  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:42:44.974633  293537 logs.go:123] Gathering logs for kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] ...
	I0919 18:42:44.974662  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:45.074514  293537 logs.go:123] Gathering logs for coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] ...
	I0919 18:42:45.075778  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:45.206965  293537 logs.go:123] Gathering logs for kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] ...
	I0919 18:42:45.207154  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:45.281778  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:45.281818  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:42:45.281948  293537 out.go:270] X Problems detected in kubelet:
	W0919 18:42:45.281964  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:45.281982  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:45.282001  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:45.282028  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:45.282048  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:42:45.282076  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:45.282086  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:42:55.283244  293537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:42:55.296761  293537 api_server.go:72] duration metric: took 2m7.20311709s to wait for apiserver process to appear ...
	I0919 18:42:55.296785  293537 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:42:55.297414  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:42:55.297493  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:42:55.343738  293537 cri.go:89] found id: "a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:55.343760  293537 cri.go:89] found id: ""
	I0919 18:42:55.343768  293537 logs.go:276] 1 containers: [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99]
	I0919 18:42:55.343824  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.348178  293537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:42:55.348259  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:42:55.387321  293537 cri.go:89] found id: "1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:55.387344  293537 cri.go:89] found id: ""
	I0919 18:42:55.387352  293537 logs.go:276] 1 containers: [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b]
	I0919 18:42:55.387410  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.391715  293537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:42:55.391785  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:42:55.430903  293537 cri.go:89] found id: "c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:55.430932  293537 cri.go:89] found id: ""
	I0919 18:42:55.430941  293537 logs.go:276] 1 containers: [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4]
	I0919 18:42:55.431002  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.434917  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:42:55.434994  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:42:55.477899  293537 cri.go:89] found id: "d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:55.477921  293537 cri.go:89] found id: ""
	I0919 18:42:55.477929  293537 logs.go:276] 1 containers: [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569]
	I0919 18:42:55.477984  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.481536  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:42:55.481605  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:42:55.519995  293537 cri.go:89] found id: "dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:55.520019  293537 cri.go:89] found id: ""
	I0919 18:42:55.520027  293537 logs.go:276] 1 containers: [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a]
	I0919 18:42:55.520084  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.523730  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:42:55.523808  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:42:55.563154  293537 cri.go:89] found id: "4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:55.563178  293537 cri.go:89] found id: ""
	I0919 18:42:55.563186  293537 logs.go:276] 1 containers: [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341]
	I0919 18:42:55.563270  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.567011  293537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:42:55.567115  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:42:55.606868  293537 cri.go:89] found id: "dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:55.606892  293537 cri.go:89] found id: ""
	I0919 18:42:55.606900  293537 logs.go:276] 1 containers: [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82]
	I0919 18:42:55.606979  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:42:55.610547  293537 logs.go:123] Gathering logs for dmesg ...
	I0919 18:42:55.610575  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:42:55.626573  293537 logs.go:123] Gathering logs for kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] ...
	I0919 18:42:55.626606  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:42:55.694807  293537 logs.go:123] Gathering logs for etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] ...
	I0919 18:42:55.694847  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:42:55.746553  293537 logs.go:123] Gathering logs for coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] ...
	I0919 18:42:55.746589  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:42:55.790244  293537 logs.go:123] Gathering logs for kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] ...
	I0919 18:42:55.790314  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:42:55.858123  293537 logs.go:123] Gathering logs for kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] ...
	I0919 18:42:55.858161  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:42:55.899740  293537 logs.go:123] Gathering logs for kubelet ...
	I0919 18:42:55.899779  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:42:55.926340  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: W0919 18:41:32.997203    1465 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.926585  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: E0919 18:41:32.997256    1465 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.926774  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016421    1465 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.927013  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016471    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.927192  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016528    1465 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.927416  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016541    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.927579  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016587    1465 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.927784  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016607    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.927976  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016654    1465 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.928213  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.928388  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.928600  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:55.928771  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:55.928980  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:42:55.987254  293537 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:42:55.987289  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:42:56.137844  293537 logs.go:123] Gathering logs for kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] ...
	I0919 18:42:56.137882  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:42:56.191991  293537 logs.go:123] Gathering logs for kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] ...
	I0919 18:42:56.192025  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:42:56.234794  293537 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:42:56.234827  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:42:56.325587  293537 logs.go:123] Gathering logs for container status ...
	I0919 18:42:56.325626  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:42:56.376152  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:56.376180  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:42:56.376244  293537 out.go:270] X Problems detected in kubelet:
	W0919 18:42:56.376253  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:56.376263  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:42:56.376271  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:42:56.376278  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:42:56.376285  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:42:56.376411  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:42:56.376419  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:43:06.376913  293537 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 18:43:06.385497  293537 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 18:43:06.387607  293537 api_server.go:141] control plane version: v1.31.1
	I0919 18:43:06.387660  293537 api_server.go:131] duration metric: took 11.090867395s to wait for apiserver health ...
	I0919 18:43:06.387671  293537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:43:06.387696  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:43:06.387762  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:43:06.425666  293537 cri.go:89] found id: "a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:43:06.425689  293537 cri.go:89] found id: ""
	I0919 18:43:06.425697  293537 logs.go:276] 1 containers: [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99]
	I0919 18:43:06.425753  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.429431  293537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:43:06.429509  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:43:06.466851  293537 cri.go:89] found id: "1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:43:06.466875  293537 cri.go:89] found id: ""
	I0919 18:43:06.466883  293537 logs.go:276] 1 containers: [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b]
	I0919 18:43:06.466939  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.470472  293537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:43:06.470544  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:43:06.509833  293537 cri.go:89] found id: "c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:43:06.509856  293537 cri.go:89] found id: ""
	I0919 18:43:06.509865  293537 logs.go:276] 1 containers: [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4]
	I0919 18:43:06.509923  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.513953  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:43:06.514030  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:43:06.554749  293537 cri.go:89] found id: "d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:43:06.554774  293537 cri.go:89] found id: ""
	I0919 18:43:06.554783  293537 logs.go:276] 1 containers: [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569]
	I0919 18:43:06.554845  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.558418  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:43:06.558487  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:43:06.597281  293537 cri.go:89] found id: "dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:43:06.597304  293537 cri.go:89] found id: ""
	I0919 18:43:06.597312  293537 logs.go:276] 1 containers: [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a]
	I0919 18:43:06.597390  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.600882  293537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:43:06.600987  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:43:06.640680  293537 cri.go:89] found id: "4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:43:06.640705  293537 cri.go:89] found id: ""
	I0919 18:43:06.640713  293537 logs.go:276] 1 containers: [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341]
	I0919 18:43:06.640779  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.644382  293537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:43:06.644491  293537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:43:06.696347  293537 cri.go:89] found id: "dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:43:06.696373  293537 cri.go:89] found id: ""
	I0919 18:43:06.696381  293537 logs.go:276] 1 containers: [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82]
	I0919 18:43:06.696436  293537 ssh_runner.go:195] Run: which crictl
	I0919 18:43:06.700014  293537 logs.go:123] Gathering logs for dmesg ...
	I0919 18:43:06.700041  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:43:06.720003  293537 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:43:06.720085  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:43:06.860572  293537 logs.go:123] Gathering logs for kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] ...
	I0919 18:43:06.860621  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99"
	I0919 18:43:06.916995  293537 logs.go:123] Gathering logs for kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] ...
	I0919 18:43:06.917032  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a"
	I0919 18:43:06.956031  293537 logs.go:123] Gathering logs for kubelet ...
	I0919 18:43:06.956059  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 18:43:06.980472  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: W0919 18:41:32.997203    1465 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.980836  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:32 addons-971880 kubelet[1465]: E0919 18:41:32.997256    1465 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.981031  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016421    1465 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.981267  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016471    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.981447  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016528    1465 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.981668  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016541    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.981833  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016587    1465 reflector.go:561] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.982037  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016607    1465 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.982224  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.016654    1465 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-971880" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.982461  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.982633  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.982849  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:06.983016  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:43:06.983221  293537 logs.go:138] Found kubelet problem: Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:43:07.042579  293537 logs.go:123] Gathering logs for etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] ...
	I0919 18:43:07.042616  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b"
	I0919 18:43:07.101867  293537 logs.go:123] Gathering logs for coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] ...
	I0919 18:43:07.101904  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4"
	I0919 18:43:07.146299  293537 logs.go:123] Gathering logs for kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] ...
	I0919 18:43:07.146391  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569"
	I0919 18:43:07.195506  293537 logs.go:123] Gathering logs for kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] ...
	I0919 18:43:07.195545  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341"
	I0919 18:43:07.269552  293537 logs.go:123] Gathering logs for kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] ...
	I0919 18:43:07.269590  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82"
	I0919 18:43:07.315873  293537 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:43:07.315908  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:43:07.406127  293537 logs.go:123] Gathering logs for container status ...
	I0919 18:43:07.406168  293537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:43:07.460453  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:43:07.460483  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0919 18:43:07.460563  293537 out.go:270] X Problems detected in kubelet:
	W0919 18:43:07.460581  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.016666    1465 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:07.460590  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.033708    1465 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-971880' and this object
	W0919 18:43:07.460610  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.033776    1465 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	W0919 18:43:07.460616  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: W0919 18:41:33.078574    1465 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-971880" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-971880' and this object
	W0919 18:43:07.460626  293537 out.go:270]   Sep 19 18:41:33 addons-971880 kubelet[1465]: E0919 18:41:33.078623    1465 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-971880\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-971880' and this object" logger="UnhandledError"
	I0919 18:43:07.460633  293537 out.go:358] Setting ErrFile to fd 2...
	I0919 18:43:07.460640  293537 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:43:17.474173  293537 system_pods.go:59] 18 kube-system pods found
	I0919 18:43:17.474216  293537 system_pods.go:61] "coredns-7c65d6cfc9-lzshk" [fa76a4be-7a2f-482a-bb9a-f8b9caf2eed4] Running
	I0919 18:43:17.474223  293537 system_pods.go:61] "csi-hostpath-attacher-0" [e4afe744-fcb9-4ef1-83bc-7da6426a009e] Running
	I0919 18:43:17.474228  293537 system_pods.go:61] "csi-hostpath-resizer-0" [35bf4614-c53f-4a64-ba65-c4d2585a4618] Running
	I0919 18:43:17.474254  293537 system_pods.go:61] "csi-hostpathplugin-f4lvd" [c4e2104a-24a2-4d2b-982f-90c367e0f6f5] Running
	I0919 18:43:17.474265  293537 system_pods.go:61] "etcd-addons-971880" [48b082ac-da22-4582-a616-c7fc480b4ab7] Running
	I0919 18:43:17.474269  293537 system_pods.go:61] "kindnet-k2v8g" [0e23ba7f-3c08-474e-a24d-b217d7ad4fff] Running
	I0919 18:43:17.474273  293537 system_pods.go:61] "kube-apiserver-addons-971880" [2b208d09-ed22-4147-a82a-c346c0576a72] Running
	I0919 18:43:17.474278  293537 system_pods.go:61] "kube-controller-manager-addons-971880" [ef368deb-bcae-4de9-9cc2-02cce640782e] Running
	I0919 18:43:17.474288  293537 system_pods.go:61] "kube-ingress-dns-minikube" [afb5e949-2f5b-462a-89a2-809679640b8d] Running
	I0919 18:43:17.474292  293537 system_pods.go:61] "kube-proxy-pf8wk" [3daa047c-3145-421d-b44a-a991266a805e] Running
	I0919 18:43:17.474301  293537 system_pods.go:61] "kube-scheduler-addons-971880" [147489f6-4fd9-4831-8bc8-c03b9624170f] Running
	I0919 18:43:17.474305  293537 system_pods.go:61] "metrics-server-84c5f94fbc-jrbzm" [4dcd9c96-80a7-42f2-86ca-69d052a20c31] Running
	I0919 18:43:17.474312  293537 system_pods.go:61] "nvidia-device-plugin-daemonset-6b6sb" [d2508241-1d3e-43e2-b635-ccd577d441ef] Running
	I0919 18:43:17.474316  293537 system_pods.go:61] "registry-66c9cd494c-zjfvp" [95228612-f951-44f9-ac40-a54760497790] Running
	I0919 18:43:17.474331  293537 system_pods.go:61] "registry-proxy-mn6mx" [384cadea-3e7f-4b57-8edb-f51b9f4dde24] Running
	I0919 18:43:17.474337  293537 system_pods.go:61] "snapshot-controller-56fcc65765-hqvnz" [e4649166-f708-403f-b875-0777c7dc2409] Running
	I0919 18:43:17.474340  293537 system_pods.go:61] "snapshot-controller-56fcc65765-jbx4b" [08c94f64-7807-4552-b598-624dd9ca5fad] Running
	I0919 18:43:17.474344  293537 system_pods.go:61] "storage-provisioner" [a5319758-9b7e-4434-b3bb-2abf6f5f5a05] Running
	I0919 18:43:17.474350  293537 system_pods.go:74] duration metric: took 11.086673196s to wait for pod list to return data ...
	I0919 18:43:17.474360  293537 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:43:17.476991  293537 default_sa.go:45] found service account: "default"
	I0919 18:43:17.477019  293537 default_sa.go:55] duration metric: took 2.651822ms for default service account to be created ...
	I0919 18:43:17.477031  293537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:43:17.487749  293537 system_pods.go:86] 18 kube-system pods found
	I0919 18:43:17.487788  293537 system_pods.go:89] "coredns-7c65d6cfc9-lzshk" [fa76a4be-7a2f-482a-bb9a-f8b9caf2eed4] Running
	I0919 18:43:17.487838  293537 system_pods.go:89] "csi-hostpath-attacher-0" [e4afe744-fcb9-4ef1-83bc-7da6426a009e] Running
	I0919 18:43:17.487852  293537 system_pods.go:89] "csi-hostpath-resizer-0" [35bf4614-c53f-4a64-ba65-c4d2585a4618] Running
	I0919 18:43:17.487857  293537 system_pods.go:89] "csi-hostpathplugin-f4lvd" [c4e2104a-24a2-4d2b-982f-90c367e0f6f5] Running
	I0919 18:43:17.487865  293537 system_pods.go:89] "etcd-addons-971880" [48b082ac-da22-4582-a616-c7fc480b4ab7] Running
	I0919 18:43:17.487875  293537 system_pods.go:89] "kindnet-k2v8g" [0e23ba7f-3c08-474e-a24d-b217d7ad4fff] Running
	I0919 18:43:17.487881  293537 system_pods.go:89] "kube-apiserver-addons-971880" [2b208d09-ed22-4147-a82a-c346c0576a72] Running
	I0919 18:43:17.487889  293537 system_pods.go:89] "kube-controller-manager-addons-971880" [ef368deb-bcae-4de9-9cc2-02cce640782e] Running
	I0919 18:43:17.487896  293537 system_pods.go:89] "kube-ingress-dns-minikube" [afb5e949-2f5b-462a-89a2-809679640b8d] Running
	I0919 18:43:17.487914  293537 system_pods.go:89] "kube-proxy-pf8wk" [3daa047c-3145-421d-b44a-a991266a805e] Running
	I0919 18:43:17.487927  293537 system_pods.go:89] "kube-scheduler-addons-971880" [147489f6-4fd9-4831-8bc8-c03b9624170f] Running
	I0919 18:43:17.487932  293537 system_pods.go:89] "metrics-server-84c5f94fbc-jrbzm" [4dcd9c96-80a7-42f2-86ca-69d052a20c31] Running
	I0919 18:43:17.487948  293537 system_pods.go:89] "nvidia-device-plugin-daemonset-6b6sb" [d2508241-1d3e-43e2-b635-ccd577d441ef] Running
	I0919 18:43:17.487959  293537 system_pods.go:89] "registry-66c9cd494c-zjfvp" [95228612-f951-44f9-ac40-a54760497790] Running
	I0919 18:43:17.487964  293537 system_pods.go:89] "registry-proxy-mn6mx" [384cadea-3e7f-4b57-8edb-f51b9f4dde24] Running
	I0919 18:43:17.487969  293537 system_pods.go:89] "snapshot-controller-56fcc65765-hqvnz" [e4649166-f708-403f-b875-0777c7dc2409] Running
	I0919 18:43:17.487975  293537 system_pods.go:89] "snapshot-controller-56fcc65765-jbx4b" [08c94f64-7807-4552-b598-624dd9ca5fad] Running
	I0919 18:43:17.487979  293537 system_pods.go:89] "storage-provisioner" [a5319758-9b7e-4434-b3bb-2abf6f5f5a05] Running
	I0919 18:43:17.487987  293537 system_pods.go:126] duration metric: took 10.951104ms to wait for k8s-apps to be running ...
	I0919 18:43:17.488020  293537 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:43:17.488142  293537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:43:17.501314  293537 system_svc.go:56] duration metric: took 13.293118ms WaitForService to wait for kubelet
	I0919 18:43:17.501349  293537 kubeadm.go:582] duration metric: took 2m29.407710689s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:43:17.501369  293537 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:43:17.504944  293537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 18:43:17.504984  293537 node_conditions.go:123] node cpu capacity is 2
	I0919 18:43:17.504998  293537 node_conditions.go:105] duration metric: took 3.620313ms to run NodePressure ...
	I0919 18:43:17.505009  293537 start.go:241] waiting for startup goroutines ...
	I0919 18:43:17.505016  293537 start.go:246] waiting for cluster config update ...
	I0919 18:43:17.505032  293537 start.go:255] writing updated cluster config ...
	I0919 18:43:17.505333  293537 ssh_runner.go:195] Run: rm -f paused
	I0919 18:43:17.844712  293537 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:43:17.848004  293537 out.go:177] * Done! kubectl is now configured to use "addons-971880" cluster and "default" namespace by default
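	
	The recurring "Found kubelet problem" entries in the transcript above all come from one mechanism: during startup the kubelet's reflectors list/watch secrets and configmaps for pods that are still being bound, and the node authorizer rejects those reads with "no relationship found between node 'addons-971880' and this object" until the pod-to-node binding is visible to it, so the warnings are typically transient. A minimal sketch of the kind of journal scan that yields these entries is below; it runs the same `journalctl -u kubelet -n 400` query shown in the log, but the problem patterns are illustrative assumptions, not minikube's actual list:
	
	package main
	
	import (
		"bufio"
		"bytes"
		"fmt"
		"os/exec"
		"regexp"
	)
	
	func main() {
		// Same journal query the transcript above shows being run
		// (there via "sudo /bin/bash -c ..." over ssh).
		out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
		if err != nil {
			fmt.Println("journalctl failed:", err)
			return
		}
		// Assumed problem patterns, chosen to match the messages in this log.
		problem := regexp.MustCompile(`Unhandled Error|failed to list|forbidden`)
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			if problem.MatchString(sc.Text()) {
				fmt.Println("Found kubelet problem:", sc.Text())
			}
		}
	}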
	
	
	==> CRI-O <==
	Sep 19 18:56:44 addons-971880 crio[951]: time="2024-09-19 18:56:44.348562620Z" level=info msg="Stopped pod sandbox (already stopped): 2d5041a3b3e10a847c967d43baca0613f1e98621e49f28d97fa4f8637f7865f5" id=47caf94e-5e2d-48ac-b2f1-930dcb992f3e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:56:44 addons-971880 crio[951]: time="2024-09-19 18:56:44.348939793Z" level=info msg="Removing pod sandbox: 2d5041a3b3e10a847c967d43baca0613f1e98621e49f28d97fa4f8637f7865f5" id=e920d4c1-a539-40be-871a-4f9a6230c08c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 18:56:44 addons-971880 crio[951]: time="2024-09-19 18:56:44.357835326Z" level=info msg="Removed pod sandbox: 2d5041a3b3e10a847c967d43baca0613f1e98621e49f28d97fa4f8637f7865f5" id=e920d4c1-a539-40be-871a-4f9a6230c08c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 18:56:44 addons-971880 crio[951]: time="2024-09-19 18:56:44.358388605Z" level=info msg="Stopping pod sandbox: 0944b05c8bcecf3d9108dd6c82365ba118c8fc393f9519952a230c7ca8155666" id=91d374b8-cbec-4ecd-81ba-51b7846047de name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:56:44 addons-971880 crio[951]: time="2024-09-19 18:56:44.358502829Z" level=info msg="Stopped pod sandbox (already stopped): 0944b05c8bcecf3d9108dd6c82365ba118c8fc393f9519952a230c7ca8155666" id=91d374b8-cbec-4ecd-81ba-51b7846047de name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:56:44 addons-971880 crio[951]: time="2024-09-19 18:56:44.358835161Z" level=info msg="Removing pod sandbox: 0944b05c8bcecf3d9108dd6c82365ba118c8fc393f9519952a230c7ca8155666" id=10235f22-9215-4675-9f2f-099955eab8d2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 18:56:44 addons-971880 crio[951]: time="2024-09-19 18:56:44.367490743Z" level=info msg="Removed pod sandbox: 0944b05c8bcecf3d9108dd6c82365ba118c8fc393f9519952a230c7ca8155666" id=10235f22-9215-4675-9f2f-099955eab8d2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 18:56:44 addons-971880 crio[951]: time="2024-09-19 18:56:44.833111837Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=959147e7-bd40-4490-aa3b-315a958281bd name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:56:44 addons-971880 crio[951]: time="2024-09-19 18:56:44.833370750Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=959147e7-bd40-4490-aa3b-315a958281bd name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:56:59 addons-971880 crio[951]: time="2024-09-19 18:56:59.832431035Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=600ffbeb-f0b0-4808-8115-9f512ea3b058 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:56:59 addons-971880 crio[951]: time="2024-09-19 18:56:59.832669911Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=600ffbeb-f0b0-4808-8115-9f512ea3b058 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:57:14 addons-971880 crio[951]: time="2024-09-19 18:57:14.832445115Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0292527c-c3fb-4d14-aee2-ac49eb359179 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:57:14 addons-971880 crio[951]: time="2024-09-19 18:57:14.832688503Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0292527c-c3fb-4d14-aee2-ac49eb359179 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:57:28 addons-971880 crio[951]: time="2024-09-19 18:57:28.833276223Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c279d4a4-769a-4060-80d0-f4f4ecd78cb7 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:57:28 addons-971880 crio[951]: time="2024-09-19 18:57:28.833514065Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c279d4a4-769a-4060-80d0-f4f4ecd78cb7 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:57:40 addons-971880 crio[951]: time="2024-09-19 18:57:40.832949792Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=739d9d3a-fda8-4241-8545-6068cbcfb5df name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:57:40 addons-971880 crio[951]: time="2024-09-19 18:57:40.833185459Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=739d9d3a-fda8-4241-8545-6068cbcfb5df name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:57:43 addons-971880 crio[951]: time="2024-09-19 18:57:43.724957133Z" level=info msg="Stopping container: 2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37 (timeout: 30s)" id=5e23d49d-e648-4830-af42-7a8f543c0784 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:57:44 addons-971880 crio[951]: time="2024-09-19 18:57:44.890066987Z" level=info msg="Stopped container 2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37: kube-system/metrics-server-84c5f94fbc-jrbzm/metrics-server" id=5e23d49d-e648-4830-af42-7a8f543c0784 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:57:44 addons-971880 crio[951]: time="2024-09-19 18:57:44.890805391Z" level=info msg="Stopping pod sandbox: 022f53b7544e5b06a1cda610d6f5d048c165195969ab9ddabdaf6382d5cda63f" id=6f30e745-36db-49c8-8ed2-741100cdb58a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:57:44 addons-971880 crio[951]: time="2024-09-19 18:57:44.891022671Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-jrbzm Namespace:kube-system ID:022f53b7544e5b06a1cda610d6f5d048c165195969ab9ddabdaf6382d5cda63f UID:4dcd9c96-80a7-42f2-86ca-69d052a20c31 NetNS:/var/run/netns/67b18d60-a81d-45fd-8137-acd2e758c825 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 18:57:44 addons-971880 crio[951]: time="2024-09-19 18:57:44.891178609Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-jrbzm from CNI network \"kindnet\" (type=ptp)"
	Sep 19 18:57:44 addons-971880 crio[951]: time="2024-09-19 18:57:44.938239197Z" level=info msg="Stopped pod sandbox: 022f53b7544e5b06a1cda610d6f5d048c165195969ab9ddabdaf6382d5cda63f" id=6f30e745-36db-49c8-8ed2-741100cdb58a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:57:44 addons-971880 crio[951]: time="2024-09-19 18:57:44.996385560Z" level=info msg="Removing container: 2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37" id=d28f998a-9e1a-41cd-8360-2d2884cf82bf name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:57:45 addons-971880 crio[951]: time="2024-09-19 18:57:45.024483045Z" level=info msg="Removed container 2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37: kube-system/metrics-server-84c5f94fbc-jrbzm/metrics-server" id=d28f998a-9e1a-41cd-8360-2d2884cf82bf name=/runtime.v1.RuntimeService/RemoveContainer
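	
	The repeated "Checking image status ... not found" pairs above, roughly every 15 seconds, are ImageStatus RPCs against CRI-O's CRI endpoint, consistent with a pod being retried whose image (the busybox used by the registry test) was never pulled; the final entries show the metrics-server container being stopped and removed. A sketch of the same ImageStatus call follows, assuming CRI-O's default socket path /var/run/crio/crio.sock and the runtime.v1 CRI API:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumed socket path: CRI-O's default CRI endpoint.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		resp, err := runtimeapi.NewImageServiceClient(conn).ImageStatus(ctx,
			&runtimeapi.ImageStatusRequest{
				Image: &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"},
			})
		if err != nil {
			panic(err)
		}
		if resp.Image == nil {
			fmt.Println("image not found") // what CRI-O logs as "Image ... not found"
		} else {
			fmt.Println("image present:", resp.Image.Id)
		}
	}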
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d8a29047aa75       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6              2 minutes ago       Running             hello-world-app           0                   ee31dc802b569       hello-world-app-55bf9c44b4-qvhwn
	cc1cce7cf558d       docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53                    4 minutes ago       Running             nginx                     0                   ea55f29e4f901       nginx
	7e2229737603a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:a40e1a121ee367d1712ac3a54ec9c38c405a65dde923c98e5fa6368fa82c4b69       15 minutes ago      Running             gcp-auth                  0                   01e59bcb2da91       gcp-auth-89d5ffd79-8f6t2
	e4ece102ee198       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98   15 minutes ago      Running             local-path-provisioner    0                   567fd5373e217       local-path-provisioner-86d989889c-s9p2l
	645c6e1070b57       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                   16 minutes ago      Running             storage-provisioner       0                   61ec5f92f3e97       storage-provisioner
	c57cc379e1c9a       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                   16 minutes ago      Running             coredns                   0                   2fb9e3187c953       coredns-7c65d6cfc9-lzshk
	dc4aa79f1b326       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d                                                   16 minutes ago      Running             kube-proxy                0                   b43f35ceba531       kube-proxy-pf8wk
	dcda5994fb9da       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51                                                   16 minutes ago      Running             kindnet-cni               0                   874829284dbe9       kindnet-k2v8g
	4e8ba4e202807       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e                                                   17 minutes ago      Running             kube-controller-manager   0                   7ee5f4b8e79eb       kube-controller-manager-addons-971880
	d599c639765e1       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d                                                   17 minutes ago      Running             kube-scheduler            0                   a0d73f380837d       kube-scheduler-addons-971880
	a6739fa07ff39       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853                                                   17 minutes ago      Running             kube-apiserver            0                   92e7a9cf57f7c       kube-apiserver-addons-971880
	1a7797ceebe32       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                   17 minutes ago      Running             etcd                      0                   0a51e9c6a88a2       etcd-addons-971880
	
	
	==> coredns [c57cc379e1c9a8e90a13a5c1580cdc3da69b0f13cd6bd3c34135fb0ade3a0cf4] <==
	[INFO] 10.244.0.15:34202 - 56364 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000210347s
	[INFO] 10.244.0.15:58149 - 38892 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002341045s
	[INFO] 10.244.0.15:58149 - 27857 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002397331s
	[INFO] 10.244.0.15:49676 - 34537 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000107306s
	[INFO] 10.244.0.15:49676 - 13548 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157021s
	[INFO] 10.244.0.15:57838 - 45202 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00012402s
	[INFO] 10.244.0.15:57838 - 669 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179569s
	[INFO] 10.244.0.15:51630 - 63490 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056164s
	[INFO] 10.244.0.15:37480 - 42395 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051873s
	[INFO] 10.244.0.15:37480 - 26265 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000048607s
	[INFO] 10.244.0.15:51630 - 21823 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084332s
	[INFO] 10.244.0.15:55956 - 23539 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001275642s
	[INFO] 10.244.0.15:55956 - 9713 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001339642s
	[INFO] 10.244.0.15:54413 - 50779 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067175s
	[INFO] 10.244.0.15:54413 - 3672 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000064312s
	[INFO] 10.244.0.20:41195 - 28456 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00097449s
	[INFO] 10.244.0.20:38142 - 31604 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001120663s
	[INFO] 10.244.0.20:49823 - 61218 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000160804s
	[INFO] 10.244.0.20:46939 - 4524 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127639s
	[INFO] 10.244.0.20:36103 - 53599 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00010249s
	[INFO] 10.244.0.20:55932 - 17378 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129329s
	[INFO] 10.244.0.20:58542 - 47562 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002373504s
	[INFO] 10.244.0.20:41076 - 61778 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002174587s
	[INFO] 10.244.0.20:51892 - 37411 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001802296s
	[INFO] 10.244.0.20:53343 - 52840 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001659954s
	
	
	==> describe nodes <==
	Name:               addons-971880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-971880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-971880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_40_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-971880
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:40:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-971880
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:57:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:56:23 +0000   Thu, 19 Sep 2024 18:40:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:56:23 +0000   Thu, 19 Sep 2024 18:40:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:56:23 +0000   Thu, 19 Sep 2024 18:40:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:56:23 +0000   Thu, 19 Sep 2024 18:41:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-971880
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd7bc352662e4b16b74f8eda34921dfa
	  System UUID:                760732df-5c49-4c7a-baae-21e5ed371ca8
	  Boot ID:                    52db61fe-4049-4d60-8bc0-73f7fa38c59e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-qvhwn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  gcp-auth                    gcp-auth-89d5ffd79-8f6t2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-lzshk                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-addons-971880                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-k2v8g                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-971880               250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-971880      200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-pf8wk                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-971880               100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          local-path-provisioner-86d989889c-s9p2l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node addons-971880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node addons-971880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node addons-971880 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node addons-971880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node addons-971880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                kubelet          Node addons-971880 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node addons-971880 event: Registered Node addons-971880 in Controller
	  Normal   NodeReady                16m                kubelet          Node addons-971880 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep19 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014930] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.480178] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.743811] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.535974] kauditd_printk_skb: 36 callbacks suppressed
	[Sep19 17:29] hrtimer: interrupt took 7222366 ns
	[Sep19 17:52] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [1a7797ceebe32b244c5f1a224638afd9f24c176c6cc7d597addad656dde2d48b] <==
	{"level":"info","ts":"2024-09-19T18:40:38.476754Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:40:38.477130Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T18:40:38.477659Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T18:40:38.477789Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:38.477860Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:38.477887Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T18:40:38.478195Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-09-19T18:40:49.175324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.946086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-09-19T18:40:49.175485Z","caller":"traceutil/trace.go:171","msg":"trace[830918283] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:342; }","duration":"233.225348ms","start":"2024-09-19T18:40:48.942248Z","end":"2024-09-19T18:40:49.175473Z","steps":["trace[830918283] 'range keys from in-memory index tree'  (duration: 232.868524ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:40:51.797488Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.274026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2024-09-19T18:40:51.797733Z","caller":"traceutil/trace.go:171","msg":"trace[1182424122] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:370; }","duration":"109.630686ms","start":"2024-09-19T18:40:51.688089Z","end":"2024-09-19T18:40:51.797719Z","steps":["trace[1182424122] 'range keys from in-memory index tree'  (duration: 108.949628ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:51.921159Z","caller":"traceutil/trace.go:171","msg":"trace[533958715] linearizableReadLoop","detail":"{readStateIndex:381; appliedIndex:380; }","duration":"112.097711ms","start":"2024-09-19T18:40:51.809047Z","end":"2024-09-19T18:40:51.921145Z","steps":["trace[533958715] 'read index received'  (duration: 41.359334ms)","trace[533958715] 'applied index is now lower than readState.Index'  (duration: 70.737803ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:40:51.921521Z","caller":"traceutil/trace.go:171","msg":"trace[119087165] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"208.072803ms","start":"2024-09-19T18:40:51.713438Z","end":"2024-09-19T18:40:51.921511Z","steps":["trace[119087165] 'process raft request'  (duration: 207.578674ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:40:51.934289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.615898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-19T18:40:51.934428Z","caller":"traceutil/trace.go:171","msg":"trace[1898066912] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:371; }","duration":"125.3687ms","start":"2024-09-19T18:40:51.809041Z","end":"2024-09-19T18:40:51.934410Z","steps":["trace[1898066912] 'agreement among raft nodes before linearized reading'  (duration: 112.594212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:40:51.947086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.812529ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-19T18:40:51.947245Z","caller":"traceutil/trace.go:171","msg":"trace[273625168] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:378; }","duration":"137.981505ms","start":"2024-09-19T18:40:51.809251Z","end":"2024-09-19T18:40:51.947233Z","steps":["trace[273625168] 'agreement among raft nodes before linearized reading'  (duration: 137.773891ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:52.209425Z","caller":"traceutil/trace.go:171","msg":"trace[74668238] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"112.887175ms","start":"2024-09-19T18:40:52.096520Z","end":"2024-09-19T18:40:52.209407Z","steps":["trace[74668238] 'process raft request'  (duration: 103.152266ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:40:52.740865Z","caller":"traceutil/trace.go:171","msg":"trace[748612513] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"103.493878ms","start":"2024-09-19T18:40:52.637355Z","end":"2024-09-19T18:40:52.740849Z","steps":["trace[748612513] 'process raft request'  (duration: 24.762129ms)","trace[748612513] 'compare'  (duration: 78.348669ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:50:38.544916Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1551}
	{"level":"info","ts":"2024-09-19T18:50:38.573319Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1551,"took":"27.857921ms","hash":3037137597,"current-db-size-bytes":6537216,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":3432448,"current-db-size-in-use":"3.4 MB"}
	{"level":"info","ts":"2024-09-19T18:50:38.573371Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3037137597,"revision":1551,"compact-revision":-1}
	{"level":"info","ts":"2024-09-19T18:55:38.550159Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1967}
	{"level":"info","ts":"2024-09-19T18:55:38.568419Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1967,"took":"17.652252ms","hash":2438010104,"current-db-size-bytes":6537216,"current-db-size":"6.5 MB","current-db-size-in-use-bytes":4575232,"current-db-size-in-use":"4.6 MB"}
	{"level":"info","ts":"2024-09-19T18:55:38.568476Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2438010104,"revision":1967,"compact-revision":1551}
	
	
	==> gcp-auth [7e2229737603afbb0dacc6d3df819da59af22f172e365f53a2f81a5439c8bcc4] <==
	2024/09/19 18:43:18 Ready to write response ...
	2024/09/19 18:43:18 Ready to marshal response ...
	2024/09/19 18:43:18 Ready to write response ...
	2024/09/19 18:51:32 Ready to marshal response ...
	2024/09/19 18:51:32 Ready to write response ...
	2024/09/19 18:51:38 Ready to marshal response ...
	2024/09/19 18:51:38 Ready to write response ...
	2024/09/19 18:52:02 Ready to marshal response ...
	2024/09/19 18:52:02 Ready to write response ...
	2024/09/19 18:52:48 Ready to marshal response ...
	2024/09/19 18:52:48 Ready to write response ...
	2024/09/19 18:55:06 Ready to marshal response ...
	2024/09/19 18:55:06 Ready to write response ...
	2024/09/19 18:55:19 Ready to marshal response ...
	2024/09/19 18:55:19 Ready to write response ...
	2024/09/19 18:55:19 Ready to marshal response ...
	2024/09/19 18:55:19 Ready to write response ...
	2024/09/19 18:55:28 Ready to marshal response ...
	2024/09/19 18:55:28 Ready to write response ...
	2024/09/19 18:55:53 Ready to marshal response ...
	2024/09/19 18:55:53 Ready to write response ...
	2024/09/19 18:55:53 Ready to marshal response ...
	2024/09/19 18:55:53 Ready to write response ...
	2024/09/19 18:55:53 Ready to marshal response ...
	2024/09/19 18:55:53 Ready to write response ...
	
	
	==> kernel <==
	 18:57:45 up  2:39,  0 users,  load average: 0.76, 0.68, 0.80
	Linux addons-971880 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [dcda5994fb9dad3bdb5086f8926c508213459cae9deb54923f4dffd2852f1b82] <==
	I0919 18:55:42.714093       1 main.go:299] handling current node
	I0919 18:55:52.713843       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:52.713957       1 main.go:299] handling current node
	I0919 18:56:02.715910       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:56:02.715947       1 main.go:299] handling current node
	I0919 18:56:12.714613       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:56:12.714655       1 main.go:299] handling current node
	I0919 18:56:22.714520       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:56:22.714645       1 main.go:299] handling current node
	I0919 18:56:32.721088       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:56:32.721128       1 main.go:299] handling current node
	I0919 18:56:42.713645       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:56:42.713697       1 main.go:299] handling current node
	I0919 18:56:52.714374       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:56:52.714410       1 main.go:299] handling current node
	I0919 18:57:02.719131       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:57:02.719168       1 main.go:299] handling current node
	I0919 18:57:12.713649       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:57:12.713685       1 main.go:299] handling current node
	I0919 18:57:22.713862       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:57:22.713901       1 main.go:299] handling current node
	I0919 18:57:32.713745       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:57:32.713869       1 main.go:299] handling current node
	I0919 18:57:42.713814       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:57:42.713929       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a6739fa07ff39070e523d4cee920e861b82fa4bd2f9be9a5c18fec0b8df87a99] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 18:42:43.683381       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 18:51:49.284184       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0919 18:51:51.111893       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0919 18:52:18.872919       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.873058       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:18.898841       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.898895       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:18.939986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.940228       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:18.978013       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:18.978127       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 18:52:19.011407       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 18:52:19.011546       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 18:52:19.978902       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0919 18:52:20.012417       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 18:52:20.066661       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0919 18:52:42.462108       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0919 18:52:43.587477       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0919 18:52:48.074937       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0919 18:52:48.382732       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.199.165"}
	I0919 18:55:06.732560       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.4.85"}
	I0919 18:55:53.087159       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.106.114"}
	I0919 18:57:44.703485       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [4e8ba4e202807048a02e5bf0c0ced036a2a964bcaf7a91f53bf6080712052341] <==
	W0919 18:55:54.234689       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:54.234730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:55:56.794550       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="57.534µs"
	I0919 18:55:56.824208       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="8.9574ms"
	I0919 18:55:56.825202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="40.148µs"
	I0919 18:56:03.706557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="7.212µs"
	I0919 18:56:13.838089       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0919 18:56:14.221443       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:56:14.221559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:56:23.736862       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-971880"
	W0919 18:56:26.141289       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:56:26.141407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:56:36.648913       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:56:36.648960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:56:36.698865       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:56:36.698913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:56:55.664986       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:56:55.665035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:57:09.003656       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:57:09.003708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:57:14.574289       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:57:14.574334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:57:36.173624       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:57:36.173668       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:57:43.711108       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="7.368µs"
	
	
	==> kube-proxy [dc4aa79f1b326b392f0966ca9b9067fb674897d417ae8cf8b7dbef4fe5de203a] <==
	I0919 18:40:52.838027       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:40:53.554142       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 18:40:53.554262       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:40:53.934955       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:40:53.935024       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:40:53.938053       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:40:53.938361       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:40:53.938589       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:40:53.939672       1 config.go:199] "Starting service config controller"
	I0919 18:40:53.939717       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:40:53.939750       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:40:53.939765       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:40:53.940395       1 config.go:328] "Starting node config controller"
	I0919 18:40:53.940414       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:40:54.042662       1 shared_informer.go:320] Caches are synced for node config
	I0919 18:40:54.056362       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:40:54.056393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d599c639765e1953306d0695fea53619d96374405a473fbb100111dab8d56569] <==
	W0919 18:40:41.910847       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:40:41.910906       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.911000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:40:41.911041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.911123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:40:41.911163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.916391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 18:40:41.916611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.916751       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0919 18:40:41.916806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.916886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 18:40:41.916941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:41.917057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 18:40:41.917171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:40:41.917314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:40:41.917429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.917507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:40:41.917859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:40:41.918004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:40:41.918067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 18:40:43.005131       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 18:56:59 addons-971880 kubelet[1465]: E0919 18:56:59.833252    1465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4f103fbd-06db-4d16-a162-93cbfb48a68e"
	Sep 19 18:57:04 addons-971880 kubelet[1465]: E0919 18:57:04.203058    1465 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772224202789343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572290,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:57:04 addons-971880 kubelet[1465]: E0919 18:57:04.203100    1465 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772224202789343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572290,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:57:14 addons-971880 kubelet[1465]: E0919 18:57:14.205791    1465 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772234205560839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572290,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:57:14 addons-971880 kubelet[1465]: E0919 18:57:14.205829    1465 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772234205560839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572290,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:57:14 addons-971880 kubelet[1465]: E0919 18:57:14.832929    1465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4f103fbd-06db-4d16-a162-93cbfb48a68e"
	Sep 19 18:57:24 addons-971880 kubelet[1465]: E0919 18:57:24.208564    1465 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772244208327493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572290,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:57:24 addons-971880 kubelet[1465]: E0919 18:57:24.208611    1465 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772244208327493,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572290,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:57:28 addons-971880 kubelet[1465]: E0919 18:57:28.833910    1465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4f103fbd-06db-4d16-a162-93cbfb48a68e"
	Sep 19 18:57:34 addons-971880 kubelet[1465]: E0919 18:57:34.211371    1465 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772254211108334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572290,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:57:34 addons-971880 kubelet[1465]: E0919 18:57:34.211413    1465 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772254211108334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572290,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:57:40 addons-971880 kubelet[1465]: E0919 18:57:40.833406    1465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4f103fbd-06db-4d16-a162-93cbfb48a68e"
	Sep 19 18:57:44 addons-971880 kubelet[1465]: E0919 18:57:44.214251    1465 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772264213956218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572290,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:57:44 addons-971880 kubelet[1465]: E0919 18:57:44.214287    1465 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772264213956218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:572290,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:57:44 addons-971880 kubelet[1465]: I0919 18:57:44.991578    1465 scope.go:117] "RemoveContainer" containerID="2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37"
	Sep 19 18:57:45 addons-971880 kubelet[1465]: I0919 18:57:45.024836    1465 scope.go:117] "RemoveContainer" containerID="2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37"
	Sep 19 18:57:45 addons-971880 kubelet[1465]: E0919 18:57:45.025331    1465 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37\": container with ID starting with 2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37 not found: ID does not exist" containerID="2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37"
	Sep 19 18:57:45 addons-971880 kubelet[1465]: I0919 18:57:45.027904    1465 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37"} err="failed to get container status \"2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37\": rpc error: code = NotFound desc = could not find container \"2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37\": container with ID starting with 2211b84a8bcc0c8477aa66c7a69101c218a238b8fc339b308549da06341d6d37 not found: ID does not exist"
	Sep 19 18:57:45 addons-971880 kubelet[1465]: I0919 18:57:45.028052    1465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z2k8\" (UniqueName: \"kubernetes.io/projected/4dcd9c96-80a7-42f2-86ca-69d052a20c31-kube-api-access-5z2k8\") pod \"4dcd9c96-80a7-42f2-86ca-69d052a20c31\" (UID: \"4dcd9c96-80a7-42f2-86ca-69d052a20c31\") "
	Sep 19 18:57:45 addons-971880 kubelet[1465]: I0919 18:57:45.029156    1465 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4dcd9c96-80a7-42f2-86ca-69d052a20c31-tmp-dir\") pod \"4dcd9c96-80a7-42f2-86ca-69d052a20c31\" (UID: \"4dcd9c96-80a7-42f2-86ca-69d052a20c31\") "
	Sep 19 18:57:45 addons-971880 kubelet[1465]: I0919 18:57:45.039852    1465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcd9c96-80a7-42f2-86ca-69d052a20c31-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "4dcd9c96-80a7-42f2-86ca-69d052a20c31" (UID: "4dcd9c96-80a7-42f2-86ca-69d052a20c31"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 19 18:57:45 addons-971880 kubelet[1465]: I0919 18:57:45.041141    1465 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dcd9c96-80a7-42f2-86ca-69d052a20c31-kube-api-access-5z2k8" (OuterVolumeSpecName: "kube-api-access-5z2k8") pod "4dcd9c96-80a7-42f2-86ca-69d052a20c31" (UID: "4dcd9c96-80a7-42f2-86ca-69d052a20c31"). InnerVolumeSpecName "kube-api-access-5z2k8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:57:45 addons-971880 kubelet[1465]: I0919 18:57:45.133492    1465 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4dcd9c96-80a7-42f2-86ca-69d052a20c31-tmp-dir\") on node \"addons-971880\" DevicePath \"\""
	Sep 19 18:57:45 addons-971880 kubelet[1465]: I0919 18:57:45.133546    1465 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5z2k8\" (UniqueName: \"kubernetes.io/projected/4dcd9c96-80a7-42f2-86ca-69d052a20c31-kube-api-access-5z2k8\") on node \"addons-971880\" DevicePath \"\""
	Sep 19 18:57:45 addons-971880 kubelet[1465]: I0919 18:57:45.833886    1465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dcd9c96-80a7-42f2-86ca-69d052a20c31" path="/var/lib/kubelet/pods/4dcd9c96-80a7-42f2-86ca-69d052a20c31/volumes"
	
	
	==> storage-provisioner [645c6e1070b57c423d66af2e3d6e057cece2b42bc10fd145e4e32e7603750853] <==
	I0919 18:41:34.075595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:41:34.089415       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:41:34.089614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:41:34.099519       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:41:34.099789       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-971880_54ef0796-8bc7-4c84-86c5-8ebc4c91b26f!
	I0919 18:41:34.100759       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f411cc94-3279-4140-8a35-80322ca09e0a", APIVersion:"v1", ResourceVersion:"930", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-971880_54ef0796-8bc7-4c84-86c5-8ebc4c91b26f became leader
	I0919 18:41:34.201066       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-971880_54ef0796-8bc7-4c84-86c5-8ebc4c91b26f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-971880 -n addons-971880
helpers_test.go:261: (dbg) Run:  kubectl --context addons-971880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-971880 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-971880 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-971880/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:43:18 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w22nf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w22nf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-971880
	  Normal   Pulling    12m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m21s (x43 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
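The Events table above explains the Pending pod: every pull of the public image gcr.io/k8s-minikube/busybox:1.28.4-glibc fails with "unable to retrieve auth token: invalid username/password", so the kubelet backs off into ImagePullBackOff. Given the fake GCP credentials injected into the pod environment, this looks like a registry-credential problem rather than a missing image, though that reading is an interpretation, not part of the captured logs. As a hedged sketch, the same failure can be re-inspected against this cluster with standard kubectl commands (the context name is taken from the test output above):

	kubectl --context addons-971880 get events --field-selector involvedObject.name=busybox
	kubectl --context addons-971880 describe pod busybox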
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (327.48s)

TestMultiControlPlane/serial/RestartCluster (128.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-310211 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0919 19:11:23.802396  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-310211 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m4.119623995s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:589: expected 3 nodes to be Ready, got 
-- stdout --
	NAME            STATUS     ROLES           AGE     VERSION
	ha-310211       NotReady   control-plane   10m     v1.31.1
	ha-310211-m02   Ready      control-plane   10m     v1.31.1
	ha-310211-m04   Ready      <none>          7m57s   v1.31.1

-- /stdout --
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:597: expected 3 nodes Ready status to be True, got 
-- stdout --
	' Unknown
	 True
	 True
	'

-- /stdout --
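The go-template passed at ha_test.go:592 walks every node's status.conditions and prints the status of each Ready condition, so the leading "Unknown" corresponds to the NotReady ha-310211 control plane in the node list above (Unknown typically means the control plane has lost contact with that node's kubelet). As a hedged sketch, not part of the captured logs, an equivalent readiness query can be written with jsonpath, assuming the current kubectl context points at the restarted ha-310211 cluster:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'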
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-310211
helpers_test.go:235: (dbg) docker inspect ha-310211:

-- stdout --
	[
	    {
	        "Id": "366d82d25ff03f8c1294620ed4b590e696c703b69c11f76e2deb36404c718ce0",
	        "Created": "2024-09-19T19:01:54.323575624Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353692,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T19:10:58.230799621Z",
	            "FinishedAt": "2024-09-19T19:10:57.476793514Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/366d82d25ff03f8c1294620ed4b590e696c703b69c11f76e2deb36404c718ce0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/366d82d25ff03f8c1294620ed4b590e696c703b69c11f76e2deb36404c718ce0/hostname",
	        "HostsPath": "/var/lib/docker/containers/366d82d25ff03f8c1294620ed4b590e696c703b69c11f76e2deb36404c718ce0/hosts",
	        "LogPath": "/var/lib/docker/containers/366d82d25ff03f8c1294620ed4b590e696c703b69c11f76e2deb36404c718ce0/366d82d25ff03f8c1294620ed4b590e696c703b69c11f76e2deb36404c718ce0-json.log",
	        "Name": "/ha-310211",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-310211:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-310211",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2a4a73637143d910cd4347aad0d844636bd1d4f7d9b5ca77cc7ff49cd88bd400-init/diff:/var/lib/docker/overlay2/01d9e9e08c815432b8994f686c30467e8ad0d2e87cf6790233377a53c691e8f4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a4a73637143d910cd4347aad0d844636bd1d4f7d9b5ca77cc7ff49cd88bd400/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a4a73637143d910cd4347aad0d844636bd1d4f7d9b5ca77cc7ff49cd88bd400/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a4a73637143d910cd4347aad0d844636bd1d4f7d9b5ca77cc7ff49cd88bd400/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-310211",
	                "Source": "/var/lib/docker/volumes/ha-310211/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-310211",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-310211",
	                "name.minikube.sigs.k8s.io": "ha-310211",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ffc81b756993a97904b8f87bd4c3810b601a879c2d99582a2cff13639a685754",
	            "SandboxKey": "/var/run/docker/netns/ffc81b756993",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33197"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-310211": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c7b00de9cd6dd148e528b9dbb4a41a6e7446f6122f8862dac99309923f435efa",
	                    "EndpointID": "4be7274c93734c6fec622cf6dc98e9c280ceaf20358e74a9b829991642d1391b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-310211",
	                        "366d82d25ff0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
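The dump above is the full docker inspect JSON for the ha-310211 container; when only a single field matters, the same JSON can be filtered with a Go template, which is exactly what the harness does later in this log to read the container state and the mapped SSH port. A minimal sketch of those two one-liners:

	docker container inspect ha-310211 --format '{{.State.Status}}'
	docker container inspect ha-310211 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'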
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-310211 -n ha-310211
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ha-310211 logs -n 25: (2.04741592s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-310211 cp ha-310211-m03:/home/docker/cp-test.txt                             | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211-m04:/home/docker/cp-test_ha-310211-m03_ha-310211-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-310211 ssh -n                                                                | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-310211 ssh -n ha-310211-m04 sudo cat                                         | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | /home/docker/cp-test_ha-310211-m03_ha-310211-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-310211 cp testdata/cp-test.txt                                               | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-310211 ssh -n                                                                | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-310211 cp ha-310211-m04:/home/docker/cp-test.txt                             | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile192546706/001/cp-test_ha-310211-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-310211 ssh -n                                                                | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-310211 cp ha-310211-m04:/home/docker/cp-test.txt                             | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211:/home/docker/cp-test_ha-310211-m04_ha-310211.txt                      |           |         |         |                     |                     |
	| ssh     | ha-310211 ssh -n                                                                | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-310211 ssh -n ha-310211 sudo cat                                             | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | /home/docker/cp-test_ha-310211-m04_ha-310211.txt                                |           |         |         |                     |                     |
	| cp      | ha-310211 cp ha-310211-m04:/home/docker/cp-test.txt                             | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211-m02:/home/docker/cp-test_ha-310211-m04_ha-310211-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-310211 ssh -n                                                                | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-310211 ssh -n ha-310211-m02 sudo cat                                         | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | /home/docker/cp-test_ha-310211-m04_ha-310211-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-310211 cp ha-310211-m04:/home/docker/cp-test.txt                             | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211-m03:/home/docker/cp-test_ha-310211-m04_ha-310211-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-310211 ssh -n                                                                | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | ha-310211-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-310211 ssh -n ha-310211-m03 sudo cat                                         | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | /home/docker/cp-test_ha-310211-m04_ha-310211-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-310211 node stop m02 -v=7                                                    | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-310211 node start m02 -v=7                                                   | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:06 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-310211 -v=7                                                          | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-310211 -v=7                                                               | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:06 UTC | 19 Sep 24 19:07 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-310211 --wait=true -v=7                                                   | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:07 UTC | 19 Sep 24 19:10 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-310211                                                               | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:10 UTC |                     |
	| node    | ha-310211 node delete m03 -v=7                                                  | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:10 UTC | 19 Sep 24 19:10 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-310211 stop -v=7                                                             | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:10 UTC | 19 Sep 24 19:10 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-310211 --wait=true                                                        | ha-310211 | jenkins | v1.34.0 | 19 Sep 24 19:10 UTC | 19 Sep 24 19:13 UTC |
	|         | -v=7 --alsologtostderr                                                          |           |         |         |                     |                     |
	|         | --driver=docker                                                                 |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                        |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 19:10:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 19:10:57.897822  353502 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:10:57.898034  353502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:10:57.898063  353502 out.go:358] Setting ErrFile to fd 2...
	I0919 19:10:57.898083  353502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:10:57.898403  353502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 19:10:57.898832  353502 out.go:352] Setting JSON to false
	I0919 19:10:57.899815  353502 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10390,"bootTime":1726762668,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 19:10:57.899916  353502 start.go:139] virtualization:  
	I0919 19:10:57.902755  353502 out.go:177] * [ha-310211] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0919 19:10:57.905299  353502 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:10:57.905421  353502 notify.go:220] Checking for updates...
	I0919 19:10:57.909598  353502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:10:57.911262  353502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 19:10:57.912934  353502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	I0919 19:10:57.914768  353502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 19:10:57.916712  353502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:10:57.919368  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:10:57.919898  353502 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:10:57.945770  353502 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 19:10:57.945925  353502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:10:57.995994  353502 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:1 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:51 SystemTime:2024-09-19 19:10:57.986808057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:10:57.996131  353502 docker.go:318] overlay module found
	I0919 19:10:58.006656  353502 out.go:177] * Using the docker driver based on existing profile
	I0919 19:10:58.009009  353502 start.go:297] selected driver: docker
	I0919 19:10:58.009041  353502 start.go:901] validating driver "docker" against &{Name:ha-310211 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-310211 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:10:58.009236  353502 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:10:58.009368  353502 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:10:58.067134  353502 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:1 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:45 OomKillDisable:true NGoroutines:51 SystemTime:2024-09-19 19:10:58.057471137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:10:58.067631  353502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:10:58.067662  353502 cni.go:84] Creating CNI manager for ""
	I0919 19:10:58.067719  353502 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 19:10:58.067771  353502 start.go:340] cluster config:
	{Name:ha-310211 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-310211 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:10:58.070418  353502 out.go:177] * Starting "ha-310211" primary control-plane node in "ha-310211" cluster
	I0919 19:10:58.072440  353502 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 19:10:58.074664  353502 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 19:10:58.076435  353502 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:10:58.076509  353502 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0919 19:10:58.076524  353502 cache.go:56] Caching tarball of preloaded images
	I0919 19:10:58.076547  353502 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 19:10:58.076638  353502 preload.go:172] Found /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0919 19:10:58.076650  353502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:10:58.076791  353502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/config.json ...
	I0919 19:10:58.095221  353502 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon, skipping pull
	I0919 19:10:58.095246  353502 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in daemon, skipping load
	I0919 19:10:58.095265  353502 cache.go:194] Successfully downloaded all kic artifacts
	I0919 19:10:58.095290  353502 start.go:360] acquireMachinesLock for ha-310211: {Name:mk1dfea3c9b34ffa6ecdb1a4e493d3ae8b5e1d0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:10:58.095353  353502 start.go:364] duration metric: took 40.525µs to acquireMachinesLock for "ha-310211"
	I0919 19:10:58.095379  353502 start.go:96] Skipping create...Using existing machine configuration
	I0919 19:10:58.095389  353502 fix.go:54] fixHost starting: 
	I0919 19:10:58.095658  353502 cli_runner.go:164] Run: docker container inspect ha-310211 --format={{.State.Status}}
	I0919 19:10:58.111974  353502 fix.go:112] recreateIfNeeded on ha-310211: state=Stopped err=<nil>
	W0919 19:10:58.112006  353502 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 19:10:58.114627  353502 out.go:177] * Restarting existing docker container for "ha-310211" ...
	I0919 19:10:58.116577  353502 cli_runner.go:164] Run: docker start ha-310211
	I0919 19:10:58.406910  353502 cli_runner.go:164] Run: docker container inspect ha-310211 --format={{.State.Status}}
	I0919 19:10:58.430725  353502 kic.go:430] container "ha-310211" state is running.
	I0919 19:10:58.431122  353502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211
	I0919 19:10:58.455457  353502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/config.json ...
	I0919 19:10:58.455880  353502 machine.go:93] provisionDockerMachine start ...
	I0919 19:10:58.455949  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:10:58.478888  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:10:58.479146  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0919 19:10:58.479156  353502 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 19:10:58.479778  353502 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44986->127.0.0.1:33193: read: connection reset by peer
	I0919 19:11:01.627427  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-310211
	
	I0919 19:11:01.627452  353502 ubuntu.go:169] provisioning hostname "ha-310211"
	I0919 19:11:01.627522  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:11:01.644356  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:11:01.644612  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0919 19:11:01.644630  353502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-310211 && echo "ha-310211" | sudo tee /etc/hostname
	I0919 19:11:01.799821  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-310211
	
	I0919 19:11:01.799900  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:11:01.817741  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:11:01.817998  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0919 19:11:01.818022  353502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-310211' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-310211/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-310211' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:11:01.960257  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:11:01.960289  353502 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-287261/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-287261/.minikube}
	I0919 19:11:01.960310  353502 ubuntu.go:177] setting up certificates
	I0919 19:11:01.960326  353502 provision.go:84] configureAuth start
	I0919 19:11:01.960391  353502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211
	I0919 19:11:01.977616  353502 provision.go:143] copyHostCerts
	I0919 19:11:01.977657  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem
	I0919 19:11:01.977690  353502 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem, removing ...
	I0919 19:11:01.977714  353502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem
	I0919 19:11:01.977793  353502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem (1082 bytes)
	I0919 19:11:01.977937  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem
	I0919 19:11:01.977959  353502 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem, removing ...
	I0919 19:11:01.977968  353502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem
	I0919 19:11:01.977999  353502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem (1123 bytes)
	I0919 19:11:01.978043  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem
	I0919 19:11:01.978066  353502 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem, removing ...
	I0919 19:11:01.978072  353502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem
	I0919 19:11:01.978099  353502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem (1675 bytes)
	I0919 19:11:01.978152  353502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem org=jenkins.ha-310211 san=[127.0.0.1 192.168.49.2 ha-310211 localhost minikube]
	I0919 19:11:02.737560  353502 provision.go:177] copyRemoteCerts
	I0919 19:11:02.737628  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:11:02.737669  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:11:02.754961  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211/id_rsa Username:docker}
	I0919 19:11:02.857086  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:11:02.857169  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 19:11:02.883018  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:11:02.883084  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 19:11:02.910147  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:11:02.910221  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:11:02.935123  353502 provision.go:87] duration metric: took 974.782832ms to configureAuth
	I0919 19:11:02.935149  353502 ubuntu.go:193] setting minikube options for container-runtime
	I0919 19:11:02.935393  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:11:02.935504  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:11:02.952653  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:11:02.952915  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33193 <nil> <nil>}
	I0919 19:11:02.952950  353502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:11:03.432214  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:11:03.432238  353502 machine.go:96] duration metric: took 4.976342409s to provisionDockerMachine
	I0919 19:11:03.432249  353502 start.go:293] postStartSetup for "ha-310211" (driver="docker")
	I0919 19:11:03.432262  353502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:11:03.432343  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:11:03.432386  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:11:03.454260  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211/id_rsa Username:docker}
	I0919 19:11:03.557043  353502 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:11:03.560453  353502 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 19:11:03.560492  353502 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 19:11:03.560523  353502 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 19:11:03.560535  353502 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 19:11:03.560546  353502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/addons for local assets ...
	I0919 19:11:03.560638  353502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/files for local assets ...
	I0919 19:11:03.560726  353502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem -> 2926662.pem in /etc/ssl/certs
	I0919 19:11:03.560741  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem -> /etc/ssl/certs/2926662.pem
	I0919 19:11:03.560847  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:11:03.569371  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem --> /etc/ssl/certs/2926662.pem (1708 bytes)
	I0919 19:11:03.594425  353502 start.go:296] duration metric: took 162.158192ms for postStartSetup
	I0919 19:11:03.594559  353502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:11:03.594609  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:11:03.611531  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211/id_rsa Username:docker}
	I0919 19:11:03.709113  353502 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 19:11:03.713723  353502 fix.go:56] duration metric: took 5.618325668s for fixHost
	I0919 19:11:03.713751  353502 start.go:83] releasing machines lock for "ha-310211", held for 5.618384196s
	I0919 19:11:03.713830  353502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211
	I0919 19:11:03.730822  353502 ssh_runner.go:195] Run: cat /version.json
	I0919 19:11:03.730882  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:11:03.730828  353502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:11:03.731044  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:11:03.759083  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211/id_rsa Username:docker}
	I0919 19:11:03.760633  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211/id_rsa Username:docker}
	I0919 19:11:03.855723  353502 ssh_runner.go:195] Run: systemctl --version
	I0919 19:11:03.984869  353502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:11:04.138120  353502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 19:11:04.142618  353502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:11:04.151920  353502 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 19:11:04.152002  353502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:11:04.161531  353502 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
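The two find commands above sideline any pre-existing loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, leaving /etc/cni/net.d to the CNI minikube installs (kindnet, per the multinode detection later in this log). For a single hypothetical file, the rename amounts to:

	sudo mv /etc/cni/net.d/200-loopback.conf /etc/cni/net.d/200-loopback.conf.mk_disabled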
	I0919 19:11:04.161611  353502 start.go:495] detecting cgroup driver to use...
	I0919 19:11:04.161655  353502 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 19:11:04.161710  353502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:11:04.174670  353502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:11:04.187075  353502 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:11:04.187167  353502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:11:04.200546  353502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:11:04.212829  353502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:11:04.300896  353502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:11:04.380121  353502 docker.go:233] disabling docker service ...
	I0919 19:11:04.380194  353502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:11:04.393114  353502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:11:04.404766  353502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:11:04.483926  353502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:11:04.563686  353502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:11:04.575389  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:11:04.592558  353502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:11:04.592633  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:04.602840  353502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:11:04.602930  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:04.612905  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:04.623079  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:04.632859  353502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:11:04.641978  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:04.652419  353502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:04.661678  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:04.672578  353502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:11:04.681411  353502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:11:04.689852  353502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:11:04.768627  353502 ssh_runner.go:195] Run: sudo systemctl restart crio
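Taken together, the sed and grep edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying settings along these lines (a hedged reconstruction from the commands, not the file verbatim):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The systemctl daemon-reload and restart crio just above apply these before the 60s socket wait that follows.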
	I0919 19:11:04.903626  353502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:11:04.903732  353502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:11:04.907970  353502 start.go:563] Will wait 60s for crictl version
	I0919 19:11:04.908164  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:11:04.911675  353502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:11:04.954387  353502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 19:11:04.954530  353502 ssh_runner.go:195] Run: crio --version
	I0919 19:11:04.997810  353502 ssh_runner.go:195] Run: crio --version
	I0919 19:11:05.050410  353502 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0919 19:11:05.052738  353502 cli_runner.go:164] Run: docker network inspect ha-310211 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 19:11:05.068664  353502 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 19:11:05.072389  353502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:11:05.084991  353502 kubeadm.go:883] updating cluster {Name:ha-310211 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-310211 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 19:11:05.085166  353502 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:11:05.085231  353502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:11:05.132923  353502 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:11:05.132948  353502 crio.go:433] Images already preloaded, skipping extraction
	I0919 19:11:05.133005  353502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 19:11:05.169374  353502 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 19:11:05.169399  353502 cache_images.go:84] Images are preloaded, skipping loading
	I0919 19:11:05.169410  353502 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0919 19:11:05.169511  353502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-310211 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-310211 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
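One detail of the kubelet drop-in above: the bare ExecStart= line is deliberate. In a systemd override, an empty assignment clears the inherited ExecStart list, so the full command on the next line replaces the packaged one instead of being appended to it.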
	I0919 19:11:05.169596  353502 ssh_runner.go:195] Run: crio config
	I0919 19:11:05.222991  353502 cni.go:84] Creating CNI manager for ""
	I0919 19:11:05.223017  353502 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 19:11:05.223028  353502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 19:11:05.223053  353502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-310211 NodeName:ha-310211 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 19:11:05.223189  353502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-310211"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
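A config assembled this way can be linted before use. Recent kubeadm releases (the cluster runs v1.31.1) ship a config validate subcommand, so a sketch of the check against the file minikube stages below would be:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new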
	I0919 19:11:05.223205  353502 kube-vip.go:115] generating kube-vip config ...
	I0919 19:11:05.223259  353502 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 19:11:05.237359  353502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:11:05.237601  353502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
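The manifest above has kube-vip elect a leader through the Lease named in vip_leasename, and the winner answers for the 192.168.49.254 VIP. Once the cluster is reachable, the current holder can be read back with a kubectl sketch like:

	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'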
	I0919 19:11:05.237758  353502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:11:05.246800  353502 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 19:11:05.246919  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 19:11:05.256328  353502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 19:11:05.276201  353502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:11:05.294476  353502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0919 19:11:05.312844  353502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:11:05.331729  353502 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:11:05.335704  353502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
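The /etc/hosts rewrites here and at 19:11:05.072389 share one idempotent pattern: grep -v strips any stale entry, the fresh line is appended, and the result lands in a temp file that sudo cp copies into place. Writing via a temp file matters because in a plain sudo echo ... > /etc/hosts the redirection would run in the unprivileged shell before sudo ever starts.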
	I0919 19:11:05.346863  353502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:11:05.437227  353502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:11:05.451273  353502 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211 for IP: 192.168.49.2
	I0919 19:11:05.451345  353502 certs.go:194] generating shared ca certs ...
	I0919 19:11:05.451380  353502 certs.go:226] acquiring lock for ca certs: {Name:mk523f1ff29ba1b125a662d8a16466e488af99fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:11:05.451563  353502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key
	I0919 19:11:05.451662  353502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key
	I0919 19:11:05.451690  353502 certs.go:256] generating profile certs ...
	I0919 19:11:05.451818  353502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/client.key
	I0919 19:11:05.451867  353502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.key.2d194291
	I0919 19:11:05.451929  353502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.crt.2d194291 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 19:11:05.781147  353502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.crt.2d194291 ...
	I0919 19:11:05.781228  353502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.crt.2d194291: {Name:mkc2db2d62834f9683fae98870269f7d004af34c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:11:05.781475  353502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.key.2d194291 ...
	I0919 19:11:05.781519  353502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.key.2d194291: {Name:mke2c569ac4707563b32d75d98343acbcc1864e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:11:05.781671  353502 certs.go:381] copying /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.crt.2d194291 -> /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.crt
	I0919 19:11:05.781870  353502 certs.go:385] copying /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.key.2d194291 -> /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.key
	I0919 19:11:05.782067  353502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/proxy-client.key
	I0919 19:11:05.782103  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:11:05.782136  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:11:05.782181  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:11:05.782216  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:11:05.782247  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:11:05.782289  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:11:05.782326  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:11:05.782355  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:11:05.782436  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666.pem (1338 bytes)
	W0919 19:11:05.782488  353502 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666_empty.pem, impossibly tiny 0 bytes
	I0919 19:11:05.782512  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 19:11:05.782569  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem (1082 bytes)
	I0919 19:11:05.782653  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:11:05.782705  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem (1675 bytes)
	I0919 19:11:05.782816  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem (1708 bytes)
	I0919 19:11:05.782872  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:11:05.782918  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666.pem -> /usr/share/ca-certificates/292666.pem
	I0919 19:11:05.782953  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem -> /usr/share/ca-certificates/2926662.pem
	I0919 19:11:05.783573  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:11:05.809460  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:11:05.833771  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:11:05.857986  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 19:11:05.882233  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 19:11:05.905875  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 19:11:05.930241  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:11:05.954867  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 19:11:05.979634  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:11:06.011756  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666.pem --> /usr/share/ca-certificates/292666.pem (1338 bytes)
	I0919 19:11:06.041978  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem --> /usr/share/ca-certificates/2926662.pem (1708 bytes)
	I0919 19:11:06.069194  353502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 19:11:06.089071  353502 ssh_runner.go:195] Run: openssl version
	I0919 19:11:06.095217  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:11:06.105797  353502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:11:06.109776  353502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:11:06.109849  353502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:11:06.117522  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 19:11:06.128697  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/292666.pem && ln -fs /usr/share/ca-certificates/292666.pem /etc/ssl/certs/292666.pem"
	I0919 19:11:06.139022  353502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/292666.pem
	I0919 19:11:06.142793  353502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 18:58 /usr/share/ca-certificates/292666.pem
	I0919 19:11:06.142896  353502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/292666.pem
	I0919 19:11:06.150385  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/292666.pem /etc/ssl/certs/51391683.0"
	I0919 19:11:06.159805  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2926662.pem && ln -fs /usr/share/ca-certificates/2926662.pem /etc/ssl/certs/2926662.pem"
	I0919 19:11:06.169721  353502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2926662.pem
	I0919 19:11:06.173515  353502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 18:58 /usr/share/ca-certificates/2926662.pem
	I0919 19:11:06.173584  353502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2926662.pem
	I0919 19:11:06.180786  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2926662.pem /etc/ssl/certs/3ec20f2e.0"
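The b5213941.0, 51391683.0, and 3ec20f2e.0 targets are OpenSSL subject-hash names: OpenSSL locates CAs in /etc/ssl/certs by the value of openssl x509 -hash plus a .0 suffix, which is why each install above pairs a hash computation with a symlink. Collapsed into one line (a sketch reusing the minikubeCA.pem paths above):

	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0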
	I0919 19:11:06.190701  353502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:11:06.194558  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 19:11:06.201915  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 19:11:06.209158  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 19:11:06.216573  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 19:11:06.224185  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 19:11:06.231208  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
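Each of the -checkend 86400 probes above exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 hours); a non-zero status is what prompts regeneration before restart. Standalone, the same check reads (sketch, reusing the etcd server cert path):

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 || echo "expires within 24h"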
	I0919 19:11:06.238242  353502 kubeadm.go:392] StartCluster: {Name:ha-310211 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-310211 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:11:06.238383  353502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 19:11:06.238448  353502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 19:11:06.277904  353502 cri.go:89] found id: ""
	I0919 19:11:06.277976  353502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 19:11:06.286840  353502 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 19:11:06.286910  353502 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0919 19:11:06.286969  353502 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 19:11:06.295572  353502 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 19:11:06.296028  353502 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-310211" does not appear in /home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 19:11:06.296153  353502 kubeconfig.go:62] /home/jenkins/minikube-integration/19664-287261/kubeconfig needs updating (will repair): [kubeconfig missing "ha-310211" cluster setting kubeconfig missing "ha-310211" context setting]
	I0919 19:11:06.296417  353502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/kubeconfig: {Name:mkfb909fdfd15278a636c3045acef421204406b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:11:06.296815  353502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 19:11:06.297101  353502 kapi.go:59] client config for ha-310211: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/client.key", CAFile:"/home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1e6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 19:11:06.297793  353502 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 19:11:06.297874  353502 cert_rotation.go:140] Starting client certificate rotation controller
	I0919 19:11:06.306592  353502 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 19:11:06.306620  353502 kubeadm.go:597] duration metric: took 19.698838ms to restartPrimaryControlPlane
	I0919 19:11:06.306631  353502 kubeadm.go:394] duration metric: took 68.399893ms to StartCluster
	I0919 19:11:06.306646  353502 settings.go:142] acquiring lock: {Name:mkc6a05e17453fceabfc207d0b4cc62ec1022659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:11:06.306710  353502 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 19:11:06.307290  353502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/kubeconfig: {Name:mkfb909fdfd15278a636c3045acef421204406b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:11:06.307480  353502 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:11:06.307503  353502 start.go:241] waiting for startup goroutines ...
	I0919 19:11:06.307517  353502 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 19:11:06.307945  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:11:06.313396  353502 out.go:177] * Enabled addons: 
	I0919 19:11:06.316026  353502 addons.go:510] duration metric: took 8.50323ms for enable addons: enabled=[]
	I0919 19:11:06.316135  353502 start.go:246] waiting for cluster config update ...
	I0919 19:11:06.316151  353502 start.go:255] writing updated cluster config ...
	I0919 19:11:06.319411  353502 out.go:201] 
	I0919 19:11:06.323214  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:11:06.323352  353502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/config.json ...
	I0919 19:11:06.326548  353502 out.go:177] * Starting "ha-310211-m02" control-plane node in "ha-310211" cluster
	I0919 19:11:06.329079  353502 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 19:11:06.331831  353502 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 19:11:06.333955  353502 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:11:06.333997  353502 cache.go:56] Caching tarball of preloaded images
	I0919 19:11:06.334036  353502 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 19:11:06.334101  353502 preload.go:172] Found /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0919 19:11:06.334117  353502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:11:06.334250  353502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/config.json ...
	I0919 19:11:06.353122  353502 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon, skipping pull
	I0919 19:11:06.353149  353502 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in daemon, skipping load
	I0919 19:11:06.353167  353502 cache.go:194] Successfully downloaded all kic artifacts
	I0919 19:11:06.353208  353502 start.go:360] acquireMachinesLock for ha-310211-m02: {Name:mk6ee4d017c316c91e84a269c01a8b0dc1e75f83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:11:06.353272  353502 start.go:364] duration metric: took 41.256µs to acquireMachinesLock for "ha-310211-m02"
	I0919 19:11:06.353298  353502 start.go:96] Skipping create...Using existing machine configuration
	I0919 19:11:06.353308  353502 fix.go:54] fixHost starting: m02
	I0919 19:11:06.353571  353502 cli_runner.go:164] Run: docker container inspect ha-310211-m02 --format={{.State.Status}}
	I0919 19:11:06.370568  353502 fix.go:112] recreateIfNeeded on ha-310211-m02: state=Stopped err=<nil>
	W0919 19:11:06.370597  353502 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 19:11:06.373853  353502 out.go:177] * Restarting existing docker container for "ha-310211-m02" ...
	I0919 19:11:06.375989  353502 cli_runner.go:164] Run: docker start ha-310211-m02
	I0919 19:11:06.658282  353502 cli_runner.go:164] Run: docker container inspect ha-310211-m02 --format={{.State.Status}}
	I0919 19:11:06.681746  353502 kic.go:430] container "ha-310211-m02" state is running.
	I0919 19:11:06.682117  353502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211-m02
	I0919 19:11:06.703586  353502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/config.json ...
	I0919 19:11:06.703833  353502 machine.go:93] provisionDockerMachine start ...
	I0919 19:11:06.703901  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m02
	I0919 19:11:06.723269  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:11:06.723509  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0919 19:11:06.723525  353502 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 19:11:06.724522  353502 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
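The handshake failed: EOF here is the usual transient while the restarted container's sshd comes up; libmachine simply retries, and the hostname probe succeeds about three seconds later.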
	I0919 19:11:09.919390  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-310211-m02
	
	I0919 19:11:09.919419  353502 ubuntu.go:169] provisioning hostname "ha-310211-m02"
	I0919 19:11:09.919491  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m02
	I0919 19:11:09.944749  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:11:09.945003  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0919 19:11:09.945021  353502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-310211-m02 && echo "ha-310211-m02" | sudo tee /etc/hostname
	I0919 19:11:10.171946  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-310211-m02
	
	I0919 19:11:10.172089  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m02
	I0919 19:11:10.200197  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:11:10.200459  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0919 19:11:10.200483  353502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-310211-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-310211-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-310211-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:11:10.405644  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:11:10.405713  353502 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-287261/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-287261/.minikube}
	I0919 19:11:10.405747  353502 ubuntu.go:177] setting up certificates
	I0919 19:11:10.405772  353502 provision.go:84] configureAuth start
	I0919 19:11:10.405856  353502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211-m02
	I0919 19:11:10.432854  353502 provision.go:143] copyHostCerts
	I0919 19:11:10.432904  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem
	I0919 19:11:10.432940  353502 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem, removing ...
	I0919 19:11:10.432952  353502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem
	I0919 19:11:10.433034  353502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem (1123 bytes)
	I0919 19:11:10.433116  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem
	I0919 19:11:10.433138  353502 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem, removing ...
	I0919 19:11:10.433147  353502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem
	I0919 19:11:10.433176  353502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem (1675 bytes)
	I0919 19:11:10.433224  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem
	I0919 19:11:10.433245  353502 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem, removing ...
	I0919 19:11:10.433252  353502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem
	I0919 19:11:10.433283  353502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem (1082 bytes)
	I0919 19:11:10.433339  353502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem org=jenkins.ha-310211-m02 san=[127.0.0.1 192.168.49.3 ha-310211-m02 localhost minikube]
	I0919 19:11:10.738043  353502 provision.go:177] copyRemoteCerts
	I0919 19:11:10.738139  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:11:10.738202  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m02
	I0919 19:11:10.755791  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m02/id_rsa Username:docker}
	I0919 19:11:10.881818  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:11:10.881889  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 19:11:10.942155  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:11:10.942234  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 19:11:11.004540  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:11:11.004673  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 19:11:11.062216  353502 provision.go:87] duration metric: took 656.41105ms to configureAuth
	I0919 19:11:11.062246  353502 ubuntu.go:193] setting minikube options for container-runtime
	I0919 19:11:11.062519  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:11:11.062650  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m02
	I0919 19:11:11.109884  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:11:11.110145  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33198 <nil> <nil>}
	I0919 19:11:11.110168  353502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:11:11.611213  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
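The 10.96.0.0/12 range passed to --insecure-registry is the cluster's service CIDR (see the kubeadm ClusterConfiguration above), so CRI-O on this node can pull over plain HTTP from in-cluster registry Services, such as the registry addon.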
	I0919 19:11:11.611251  353502 machine.go:96] duration metric: took 4.907401097s to provisionDockerMachine
	I0919 19:11:11.611263  353502 start.go:293] postStartSetup for "ha-310211-m02" (driver="docker")
	I0919 19:11:11.611276  353502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:11:11.611372  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:11:11.611429  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m02
	I0919 19:11:11.635789  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m02/id_rsa Username:docker}
	I0919 19:11:11.816570  353502 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:11:11.834369  353502 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 19:11:11.834408  353502 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 19:11:11.834420  353502 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 19:11:11.834428  353502 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 19:11:11.834440  353502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/addons for local assets ...
	I0919 19:11:11.834505  353502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/files for local assets ...
	I0919 19:11:11.834586  353502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem -> 2926662.pem in /etc/ssl/certs
	I0919 19:11:11.834600  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem -> /etc/ssl/certs/2926662.pem
	I0919 19:11:11.834706  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:11:11.871754  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem --> /etc/ssl/certs/2926662.pem (1708 bytes)
	I0919 19:11:11.927118  353502 start.go:296] duration metric: took 315.838578ms for postStartSetup
	I0919 19:11:11.927202  353502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:11:11.927249  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m02
	I0919 19:11:11.959051  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m02/id_rsa Username:docker}
	I0919 19:11:12.123221  353502 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 19:11:12.145212  353502 fix.go:56] duration metric: took 5.791895277s for fixHost
	I0919 19:11:12.145241  353502 start.go:83] releasing machines lock for "ha-310211-m02", held for 5.791955159s
	I0919 19:11:12.145312  353502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211-m02
	I0919 19:11:12.178523  353502 out.go:177] * Found network options:
	I0919 19:11:12.181174  353502 out.go:177]   - NO_PROXY=192.168.49.2
	W0919 19:11:12.183473  353502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 19:11:12.183532  353502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:11:12.183602  353502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:11:12.183643  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m02
	I0919 19:11:12.183658  353502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:11:12.183721  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m02
	I0919 19:11:12.223601  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m02/id_rsa Username:docker}
	I0919 19:11:12.226186  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33198 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m02/id_rsa Username:docker}
	I0919 19:11:12.709649  353502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 19:11:12.721284  353502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:11:12.757198  353502 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 19:11:12.757447  353502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:11:12.782901  353502 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 19:11:12.782995  353502 start.go:495] detecting cgroup driver to use...
	I0919 19:11:12.783070  353502 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 19:11:12.783191  353502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:11:12.832894  353502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:11:12.880252  353502 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:11:12.880450  353502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:11:12.933811  353502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:11:12.982314  353502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:11:13.323090  353502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:11:13.632833  353502 docker.go:233] disabling docker service ...
	I0919 19:11:13.632971  353502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:11:13.696757  353502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:11:13.735890  353502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:11:14.043872  353502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:11:14.394382  353502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:11:14.448699  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:11:14.510711  353502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:11:14.510781  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:14.539072  353502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:11:14.539144  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:14.561159  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:14.580904  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:14.600838  353502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:11:14.620195  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:14.646988  353502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:11:14.670578  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
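
Net effect of the sed chain above: the /etc/crio/crio.conf.d/02-crio.conf drop-in ends up carrying roughly the following keys (under CRI-O's usual TOML sections). This fragment is reconstructed from the commands themselves, not captured from the node:

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
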
	I0919 19:11:14.694686  353502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:11:14.721530  353502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:11:14.754489  353502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:11:15.071009  353502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 19:11:15.558102  353502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:11:15.558174  353502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:11:15.564019  353502 start.go:563] Will wait 60s for crictl version
	I0919 19:11:15.564081  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:11:15.567800  353502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:11:15.669568  353502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 19:11:15.669655  353502 ssh_runner.go:195] Run: crio --version
	I0919 19:11:15.788669  353502 ssh_runner.go:195] Run: crio --version
	I0919 19:11:15.961944  353502 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0919 19:11:15.963965  353502 out.go:177]   - env NO_PROXY=192.168.49.2
	I0919 19:11:15.965814  353502 cli_runner.go:164] Run: docker network inspect ha-310211 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 19:11:15.993652  353502 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 19:11:15.998971  353502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:11:16.021713  353502 mustload.go:65] Loading cluster: ha-310211
	I0919 19:11:16.021956  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:11:16.022243  353502 cli_runner.go:164] Run: docker container inspect ha-310211 --format={{.State.Status}}
	I0919 19:11:16.061885  353502 host.go:66] Checking if "ha-310211" exists ...
	I0919 19:11:16.062184  353502 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211 for IP: 192.168.49.3
	I0919 19:11:16.062194  353502 certs.go:194] generating shared ca certs ...
	I0919 19:11:16.062208  353502 certs.go:226] acquiring lock for ca certs: {Name:mk523f1ff29ba1b125a662d8a16466e488af99fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:11:16.062321  353502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key
	I0919 19:11:16.062362  353502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key
	I0919 19:11:16.062369  353502 certs.go:256] generating profile certs ...
	I0919 19:11:16.062450  353502 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/client.key
	I0919 19:11:16.062527  353502 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.key.b202fd99
	I0919 19:11:16.062571  353502 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/proxy-client.key
	I0919 19:11:16.062606  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:11:16.062620  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:11:16.062631  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:11:16.062641  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:11:16.062653  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 19:11:16.062664  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 19:11:16.062674  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 19:11:16.062688  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 19:11:16.062741  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666.pem (1338 bytes)
	W0919 19:11:16.062768  353502 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666_empty.pem, impossibly tiny 0 bytes
	I0919 19:11:16.062776  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 19:11:16.062799  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem (1082 bytes)
	I0919 19:11:16.062825  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:11:16.062849  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem (1675 bytes)
	I0919 19:11:16.062925  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem (1708 bytes)
	I0919 19:11:16.062959  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666.pem -> /usr/share/ca-certificates/292666.pem
	I0919 19:11:16.062972  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem -> /usr/share/ca-certificates/2926662.pem
	I0919 19:11:16.062983  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:11:16.063038  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:11:16.094003  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211/id_rsa Username:docker}
	I0919 19:11:16.212409  353502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 19:11:16.226998  353502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 19:11:16.250945  353502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 19:11:16.260524  353502 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 19:11:16.285227  353502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 19:11:16.298591  353502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 19:11:16.311072  353502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 19:11:16.321696  353502 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 19:11:16.352207  353502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 19:11:16.362923  353502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 19:11:16.393204  353502 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 19:11:16.404706  353502 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 19:11:16.428248  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:11:16.466259  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:11:16.512348  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:11:16.554113  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 19:11:16.589491  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 19:11:16.635446  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 19:11:16.680065  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 19:11:16.720585  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 19:11:16.798564  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666.pem --> /usr/share/ca-certificates/292666.pem (1338 bytes)
	I0919 19:11:16.845446  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem --> /usr/share/ca-certificates/2926662.pem (1708 bytes)
	I0919 19:11:16.881361  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:11:16.930389  353502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 19:11:16.967361  353502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 19:11:16.999045  353502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 19:11:17.024982  353502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 19:11:17.059123  353502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 19:11:17.096522  353502 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 19:11:17.130624  353502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 19:11:17.166395  353502 ssh_runner.go:195] Run: openssl version
	I0919 19:11:17.178206  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/292666.pem && ln -fs /usr/share/ca-certificates/292666.pem /etc/ssl/certs/292666.pem"
	I0919 19:11:17.192916  353502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/292666.pem
	I0919 19:11:17.196782  353502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 18:58 /usr/share/ca-certificates/292666.pem
	I0919 19:11:17.196942  353502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/292666.pem
	I0919 19:11:17.210432  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/292666.pem /etc/ssl/certs/51391683.0"
	I0919 19:11:17.221639  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2926662.pem && ln -fs /usr/share/ca-certificates/2926662.pem /etc/ssl/certs/2926662.pem"
	I0919 19:11:17.236951  353502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2926662.pem
	I0919 19:11:17.248606  353502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 18:58 /usr/share/ca-certificates/2926662.pem
	I0919 19:11:17.248750  353502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2926662.pem
	I0919 19:11:17.257205  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2926662.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 19:11:17.277756  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:11:17.297753  353502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:11:17.302128  353502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:11:17.302264  353502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:11:17.312346  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
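
`openssl x509 -hash -noout` prints the subject-name hash that OpenSSL's certificate-directory lookup expects as a symlink name, which is why each PEM above gets a /etc/ssl/certs/<hash>.0 link (b5213941.0 for minikubeCA.pem). A small Go sketch of the same hash-and-link pattern, shelling out to openssl as the log does (needs root; the path is one of the certs above):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash replicates the `openssl x509 -hash` + `ln -fs`
	// pattern from the log: /etc/ssl/certs/<subject-hash>.0 -> the PEM file.
	func linkBySubjectHash(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace an existing link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
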
	I0919 19:11:17.325655  353502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:11:17.330365  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 19:11:17.337929  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 19:11:17.346778  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 19:11:17.356634  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 19:11:17.373718  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 19:11:17.385194  353502 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
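
Each `-checkend 86400` run above asks openssl to exit non-zero if the certificate expires within the next 86400 seconds (24 hours). The same check expressed with Go's crypto/x509, as a sketch (the cert path is one of the files probed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires within d — the `openssl x509 -checkend` check from the log.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Expiring within d means now+d is past NotAfter.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon) // the openssl variant exits non-zero here
	}
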
	I0919 19:11:17.397764  353502 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 crio true true} ...
	I0919 19:11:17.397935  353502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-310211-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-310211 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:11:17.397992  353502 kube-vip.go:115] generating kube-vip config ...
	I0919 19:11:17.398076  353502 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 19:11:17.412649  353502 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0919 19:11:17.412806  353502 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
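
The generated manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, where the kubelet picks it up as a static pod. A hedged sketch of a sanity check one could run on such a manifest, parsing it into a corev1.Pod with sigs.k8s.io/yaml (this is illustrative only, not something minikube does in this log):

	package main

	import (
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// Path taken from the scp step below; parsing it as a corev1.Pod is a
		// quick check that the generated manifest is a structurally valid static pod.
		data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			fmt.Fprintln(os.Stderr, "invalid manifest:", err)
			return
		}
		if len(pod.Spec.Containers) == 0 {
			fmt.Fprintln(os.Stderr, "manifest has no containers")
			return
		}
		fmt.Printf("pod %s/%s, image %s, hostNetwork=%v\n",
			pod.Namespace, pod.Name, pod.Spec.Containers[0].Image, pod.Spec.HostNetwork)
	}
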
	I0919 19:11:17.412904  353502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:11:17.430120  353502 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 19:11:17.430290  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 19:11:17.439885  353502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 19:11:17.466159  353502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:11:17.490223  353502 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0919 19:11:17.515711  353502 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:11:17.520210  353502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:11:17.539067  353502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:11:17.715809  353502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:11:17.734819  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:11:17.734544  353502 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 19:11:17.739426  353502 out.go:177] * Verifying Kubernetes components...
	I0919 19:11:17.741548  353502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:11:17.927175  353502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:11:17.941135  353502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 19:11:17.941497  353502 kapi.go:59] client config for ha-310211: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/client.key", CAFile:"/home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1e6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 19:11:17.941598  353502 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 19:11:17.941912  353502 node_ready.go:35] waiting up to 6m0s for node "ha-310211-m02" to be "Ready" ...
	I0919 19:11:17.942072  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:17.942099  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:17.942129  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:17.942162  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:30.606246  353502 round_trippers.go:574] Response Status: 500 Internal Server Error in 12664 milliseconds
	I0919 19:11:30.607170  353502 node_ready.go:53] error getting node "ha-310211-m02": etcdserver: request timed out
	I0919 19:11:30.607244  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:30.607255  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:30.607264  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:30.607275  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:39.268833  353502 round_trippers.go:574] Response Status: 500 Internal Server Error in 8661 milliseconds
	I0919 19:11:39.269083  353502 node_ready.go:53] error getting node "ha-310211-m02": etcdserver: leader changed
	I0919 19:11:39.269150  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:39.269157  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:39.269167  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:39.269172  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:39.290593  353502 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0919 19:11:39.291447  353502 node_ready.go:49] node "ha-310211-m02" has status "Ready":"True"
	I0919 19:11:39.291462  353502 node_ready.go:38] duration metric: took 21.349513715s for node "ha-310211-m02" to be "Ready" ...
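
The node wait above is a poll of GET /api/v1/nodes/<name> until the node reports the Ready condition as True; the intermediate 500s ("etcdserver: request timed out", "etcdserver: leader changed") are etcd re-electing a leader while the HA cluster restarts. A minimal client-go sketch of the same readiness check (the kubeconfig path is hypothetical):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady mirrors the node_ready.go polling above: fetch the Node
	// object and look for the Ready condition with status True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. "etcdserver: request timed out" during restart
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ready, err := nodeReady(cs, "ha-310211-m02")
		fmt.Println(ready, err)
	}
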
	I0919 19:11:39.291472  353502 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 19:11:39.291515  353502 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 19:11:39.291527  353502 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 19:11:39.291586  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0919 19:11:39.291591  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:39.291598  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:39.291602  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:39.301381  353502 round_trippers.go:574] Response Status: 429 Too Many Requests in 9 milliseconds
	I0919 19:11:40.301779  353502 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0919 19:11:40.301837  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0919 19:11:40.301848  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:40.301857  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:40.301866  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:40.312380  353502 round_trippers.go:574] Response Status: 429 Too Many Requests in 10 milliseconds
	I0919 19:11:41.314720  353502 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0919 19:11:41.314767  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0919 19:11:41.314773  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.314783  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.314788  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.322494  353502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
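
The two 429s above come back with a Retry-After header, and client-go's with_retry sleeps for exactly that long before re-issuing the request, which is the 1 s gap between attempts. A generic net/http sketch of honoring Retry-After (URL from the log; real use would also need the cluster CA and client certificates, which this sketch omits):

	package main

	import (
		"fmt"
		"net/http"
		"strconv"
		"time"
	)

	// getWithRetryAfter re-issues a GET when the server answers 429,
	// sleeping for the Retry-After the server asked for — the behavior
	// logged by with_retry.go above.
	func getWithRetryAfter(url string, attempts int) (*http.Response, error) {
		for i := 0; i < attempts; i++ {
			resp, err := http.Get(url)
			if err != nil {
				return nil, err
			}
			if resp.StatusCode != http.StatusTooManyRequests {
				return resp, nil
			}
			secs, _ := strconv.Atoi(resp.Header.Get("Retry-After"))
			if secs <= 0 {
				secs = 1 // fall back to a short wait if the header is absent
			}
			resp.Body.Close()
			time.Sleep(time.Duration(secs) * time.Second)
		}
		return nil, fmt.Errorf("still throttled after %d attempts", attempts)
	}

	func main() {
		resp, err := getWithRetryAfter("https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods", 3)
		fmt.Println(resp, err) // would fail TLS verification without the cluster CA
	}
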
	I0919 19:11:41.337636  353502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.337903  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:11:41.337952  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.337985  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.338005  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.342034  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:11:41.342687  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:41.342704  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.342717  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.342726  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.345517  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:41.346129  353502 pod_ready.go:93] pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:41.346146  353502 pod_ready.go:82] duration metric: took 8.371577ms for pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.346157  353502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rcmrq" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.346222  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rcmrq
	I0919 19:11:41.346237  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.346247  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.346257  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.351523  353502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:11:41.352709  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:41.352732  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.352742  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.352747  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.356507  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:11:41.357508  353502 pod_ready.go:93] pod "coredns-7c65d6cfc9-rcmrq" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:41.357534  353502 pod_ready.go:82] duration metric: took 11.368629ms for pod "coredns-7c65d6cfc9-rcmrq" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.357568  353502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.357665  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-310211
	I0919 19:11:41.357678  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.357696  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.357709  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.361664  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:11:41.362770  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:41.362793  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.362824  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.362836  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.365626  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:41.366660  353502 pod_ready.go:93] pod "etcd-ha-310211" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:41.366685  353502 pod_ready.go:82] duration metric: took 9.101785ms for pod "etcd-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.366722  353502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.366810  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-310211-m02
	I0919 19:11:41.366820  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.366839  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.366850  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.371152  353502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:11:41.372167  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:41.372188  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.372197  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.372203  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.378260  353502 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0919 19:11:41.379331  353502 pod_ready.go:93] pod "etcd-ha-310211-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:41.379356  353502 pod_ready.go:82] duration metric: took 12.624738ms for pod "etcd-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.379391  353502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.379494  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-310211-m03
	I0919 19:11:41.379508  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.379529  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.379542  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.382980  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:11:41.515315  353502 request.go:632] Waited for 131.119681ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m03
	I0919 19:11:41.515425  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m03
	I0919 19:11:41.515437  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.515459  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.515471  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.518224  353502 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0919 19:11:41.518392  353502 pod_ready.go:98] node "ha-310211-m03" hosting pod "etcd-ha-310211-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-310211-m03": nodes "ha-310211-m03" not found
	I0919 19:11:41.518424  353502 pod_ready.go:82] duration metric: took 139.012906ms for pod "etcd-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	E0919 19:11:41.518444  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211-m03" hosting pod "etcd-ha-310211-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-310211-m03": nodes "ha-310211-m03" not found
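
The "Waited for ... due to client-side throttling" lines are client-go's own QPS limiter pacing requests before they leave the process; as the message says, this is not server-side priority and fairness. A sketch of the same effect with golang.org/x/time/rate (QPS 5 / burst 10 are client-go's usual defaults when unset, and an assumption here):

	package main

	import (
		"context"
		"fmt"
		"time"

		"golang.org/x/time/rate"
	)

	func main() {
		// A token bucket: 5 requests/second sustained, bursts of up to 10.
		limiter := rate.NewLimiter(rate.Limit(5), 10)
		for i := 0; i < 15; i++ {
			start := time.Now()
			if err := limiter.Wait(context.Background()); err != nil {
				panic(err)
			}
			if d := time.Since(start); d > time.Millisecond {
				// mirrors "Waited for 196ms due to client-side throttling" above
				fmt.Printf("request %d waited %v\n", i, d)
			}
		}
	}
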
	I0919 19:11:41.518466  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.715694  353502 request.go:632] Waited for 197.14369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211
	I0919 19:11:41.715775  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211
	I0919 19:11:41.715804  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.715814  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.715819  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.718838  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:41.915313  353502 request.go:632] Waited for 195.312564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:41.915416  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:41.915428  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:41.915437  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:41.915442  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:41.918077  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:41.918730  353502 pod_ready.go:93] pod "kube-apiserver-ha-310211" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:41.918754  353502 pod_ready.go:82] duration metric: took 400.274394ms for pod "kube-apiserver-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:41.918792  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:42.115191  353502 request.go:632] Waited for 196.320287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211-m02
	I0919 19:11:42.115306  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211-m02
	I0919 19:11:42.115329  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:42.115354  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:42.115361  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:42.119069  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:11:42.315466  353502 request.go:632] Waited for 195.406406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:42.315551  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:42.315567  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:42.315577  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:42.315588  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:42.327098  353502 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0919 19:11:42.327748  353502 pod_ready.go:93] pod "kube-apiserver-ha-310211-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:42.327779  353502 pod_ready.go:82] duration metric: took 408.970351ms for pod "kube-apiserver-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:42.327792  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:42.515565  353502 request.go:632] Waited for 187.678531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211-m03
	I0919 19:11:42.515723  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211-m03
	I0919 19:11:42.515734  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:42.515747  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:42.515753  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:42.576137  353502 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0919 19:11:42.715501  353502 request.go:632] Waited for 138.185761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m03
	I0919 19:11:42.715595  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m03
	I0919 19:11:42.715607  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:42.715622  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:42.715656  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:42.729028  353502 round_trippers.go:574] Response Status: 404 Not Found in 13 milliseconds
	I0919 19:11:42.729346  353502 pod_ready.go:98] node "ha-310211-m03" hosting pod "kube-apiserver-ha-310211-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-310211-m03": nodes "ha-310211-m03" not found
	I0919 19:11:42.729372  353502 pod_ready.go:82] duration metric: took 401.552048ms for pod "kube-apiserver-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	E0919 19:11:42.729410  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211-m03" hosting pod "kube-apiserver-ha-310211-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-310211-m03": nodes "ha-310211-m03" not found
	I0919 19:11:42.729420  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:42.914860  353502 request.go:632] Waited for 185.343372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211
	I0919 19:11:42.914949  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211
	I0919 19:11:42.914962  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:42.914972  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:42.914980  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:42.931397  353502 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0919 19:11:43.115295  353502 request.go:632] Waited for 181.320676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:43.115384  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:43.115394  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:43.115404  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:43.115417  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:43.123209  353502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0919 19:11:43.123913  353502 pod_ready.go:93] pod "kube-controller-manager-ha-310211" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:43.123937  353502 pod_ready.go:82] duration metric: took 394.501064ms for pod "kube-controller-manager-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:43.123951  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:43.315253  353502 request.go:632] Waited for 191.1891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211-m02
	I0919 19:11:43.315331  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211-m02
	I0919 19:11:43.315345  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:43.315353  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:43.315358  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:43.319234  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:11:43.514780  353502 request.go:632] Waited for 194.245468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:43.514933  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:43.514959  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:43.514982  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:43.515011  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:43.520942  353502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:11:43.521782  353502 pod_ready.go:93] pod "kube-controller-manager-ha-310211-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:43.521853  353502 pod_ready.go:82] duration metric: took 397.869963ms for pod "kube-controller-manager-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:43.521889  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:43.715317  353502 request.go:632] Waited for 193.300219ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211-m03
	I0919 19:11:43.715386  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211-m03
	I0919 19:11:43.715398  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:43.715409  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:43.715424  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:43.718474  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:11:43.915615  353502 request.go:632] Waited for 196.192318ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m03
	I0919 19:11:43.915731  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m03
	I0919 19:11:43.915768  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:43.915798  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:43.915822  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:43.918647  353502 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0919 19:11:43.918780  353502 pod_ready.go:98] node "ha-310211-m03" hosting pod "kube-controller-manager-ha-310211-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-310211-m03": nodes "ha-310211-m03" not found
	I0919 19:11:43.918797  353502 pod_ready.go:82] duration metric: took 396.857055ms for pod "kube-controller-manager-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	E0919 19:11:43.918810  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211-m03" hosting pod "kube-controller-manager-ha-310211-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-310211-m03": nodes "ha-310211-m03" not found
	I0919 19:11:43.918818  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jg6c" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:44.114909  353502 request.go:632] Waited for 196.010426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9jg6c
	I0919 19:11:44.115026  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9jg6c
	I0919 19:11:44.115098  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:44.115125  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:44.115159  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:44.118023  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:44.315350  353502 request.go:632] Waited for 196.343226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m04
	I0919 19:11:44.315424  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m04
	I0919 19:11:44.315436  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:44.315445  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:44.315450  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:44.318301  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:44.318884  353502 pod_ready.go:93] pod "kube-proxy-9jg6c" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:44.318903  353502 pod_ready.go:82] duration metric: took 400.068547ms for pod "kube-proxy-9jg6c" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:44.318915  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f2xdc" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:44.515205  353502 request.go:632] Waited for 196.224203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2xdc
	I0919 19:11:44.515271  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2xdc
	I0919 19:11:44.515283  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:44.515292  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:44.515302  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:44.518265  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:44.715463  353502 request.go:632] Waited for 196.337114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m03
	I0919 19:11:44.715520  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m03
	I0919 19:11:44.715527  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:44.715537  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:44.715546  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:44.718256  353502 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0919 19:11:44.718426  353502 pod_ready.go:98] node "ha-310211-m03" hosting pod "kube-proxy-f2xdc" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-310211-m03": nodes "ha-310211-m03" not found
	I0919 19:11:44.718447  353502 pod_ready.go:82] duration metric: took 399.523741ms for pod "kube-proxy-f2xdc" in "kube-system" namespace to be "Ready" ...
	E0919 19:11:44.718472  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211-m03" hosting pod "kube-proxy-f2xdc" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-310211-m03": nodes "ha-310211-m03" not found
	I0919 19:11:44.718481  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lbfq4" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:44.915740  353502 request.go:632] Waited for 197.159055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lbfq4
	I0919 19:11:44.915823  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lbfq4
	I0919 19:11:44.915834  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:44.915843  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:44.915847  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:44.918813  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:45.117888  353502 request.go:632] Waited for 198.325392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:45.117963  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:45.117970  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:45.117979  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:45.117985  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:45.123549  353502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:11:45.125039  353502 pod_ready.go:93] pod "kube-proxy-lbfq4" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:45.125068  353502 pod_ready.go:82] duration metric: took 406.567553ms for pod "kube-proxy-lbfq4" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:45.125082  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vsrc4" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:45.315872  353502 request.go:632] Waited for 190.69876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vsrc4
	I0919 19:11:45.316036  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vsrc4
	I0919 19:11:45.316056  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:45.316065  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:45.316071  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:45.319631  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:11:45.514881  353502 request.go:632] Waited for 194.246918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:45.514945  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:45.514960  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:45.514980  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:45.514988  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:45.517718  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:45.518546  353502 pod_ready.go:93] pod "kube-proxy-vsrc4" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:45.518571  353502 pod_ready.go:82] duration metric: took 393.477673ms for pod "kube-proxy-vsrc4" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:45.518583  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:45.715594  353502 request.go:632] Waited for 196.922246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211
	I0919 19:11:45.715679  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211
	I0919 19:11:45.715693  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:45.715703  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:45.715708  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:45.718588  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:45.915615  353502 request.go:632] Waited for 196.33543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:45.915715  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:11:45.915729  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:45.915738  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:45.915757  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:45.918660  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:45.919271  353502 pod_ready.go:93] pod "kube-scheduler-ha-310211" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:45.919291  353502 pod_ready.go:82] duration metric: took 400.698675ms for pod "kube-scheduler-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:45.919303  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:46.115757  353502 request.go:632] Waited for 196.347089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211-m02
	I0919 19:11:46.115832  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211-m02
	I0919 19:11:46.115844  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:46.115853  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:46.115858  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:46.120045  353502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:11:46.315400  353502 request.go:632] Waited for 194.722536ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:46.315506  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:11:46.315519  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:46.315527  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:46.315532  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:46.318345  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:46.318969  353502 pod_ready.go:93] pod "kube-scheduler-ha-310211-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:11:46.319021  353502 pod_ready.go:82] duration metric: took 399.675723ms for pod "kube-scheduler-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:46.319041  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:11:46.515746  353502 request.go:632] Waited for 196.632759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211-m03
	I0919 19:11:46.515815  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211-m03
	I0919 19:11:46.515822  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:46.515836  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:46.515847  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:46.518761  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:11:46.714789  353502 request.go:632] Waited for 195.249569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m03
	I0919 19:11:46.714893  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m03
	I0919 19:11:46.714913  353502 round_trippers.go:469] Request Headers:
	I0919 19:11:46.714947  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:11:46.714971  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:11:46.718077  353502 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0919 19:11:46.718377  353502 pod_ready.go:98] node "ha-310211-m03" hosting pod "kube-scheduler-ha-310211-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-310211-m03": nodes "ha-310211-m03" not found
	I0919 19:11:46.718410  353502 pod_ready.go:82] duration metric: took 399.360907ms for pod "kube-scheduler-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	E0919 19:11:46.718429  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211-m03" hosting pod "kube-scheduler-ha-310211-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-310211-m03": nodes "ha-310211-m03" not found
	I0919 19:11:46.718438  353502 pod_ready.go:39] duration metric: took 7.426956409s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
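The pod_ready:98 and pod_ready:67 lines just above show why the kube-scheduler-ha-310211-m03 wait ends early rather than failing the run: the node backing the pod returns 404, so the pod is skipped. A hedged sketch of that decision, assuming a hypothetical helper name nodeGone:

package example

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeGone reports whether the node hosting a pod has been deleted, in which
// case waiting for that pod to become Ready is pointless and can be skipped,
// as with ha-310211-m03 in the log above.
func nodeGone(ctx context.Context, cs kubernetes.Interface, nodeName string) (bool, error) {
	_, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil
	}
	return false, err
}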
	I0919 19:11:46.718461  353502 api_server.go:52] waiting for apiserver process to appear ...
	I0919 19:11:46.718525  353502 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:11:46.732189  353502 api_server.go:72] duration metric: took 28.997245979s to wait for apiserver process to appear ...
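Before probing HTTP health, api_server.go first confirms a kube-apiserver process exists by running sudo pgrep -xnf kube-apiserver.*minikube.* through minikube's ssh_runner inside the node. A rough local approximation with os/exec (minikube actually executes this over SSH, not on the host):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -f matches against the full command line, -x requires the pattern to
	// match exactly, and -n keeps only the newest matching process.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found yet:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}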
	I0919 19:11:46.732215  353502 api_server.go:88] waiting for apiserver healthz status ...
	I0919 19:11:46.732236  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:46.745082  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:46.745113  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
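Once the process is up, minikube polls /healthz until it returns 200; the verbose 500 bodies above list every registered check, and only the start-service-ip-repair-controllers post-start hook is still failing. A minimal sketch of such a poll; the InsecureSkipVerify shortcut is for illustration only (minikube authenticates with the cluster's real CA and client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: a real client should load the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}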
	I0919 19:11:47.232622  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	[... this and 13 further probes through 19:11:53.733, at roughly 500ms intervals, all returned HTTP 500 with a body identical to the one above (only [-]poststarthook/start-service-ip-repair-controllers failing); the output, logged twice per probe at api_server.go:279 and api_server.go:103, is elided here as verbatim duplicate ...]
	I0919 19:11:54.232430  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:54.241736  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:54.241781  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:54.733275  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:54.741382  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:54.741412  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:55.233047  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:55.243894  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:55.243926  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:55.733261  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:55.741195  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:55.741227  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:56.232632  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:56.240323  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:56.240353  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:56.733128  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:56.740812  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:56.740855  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:57.232360  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:57.240226  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:57.240255  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:57.732836  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:57.740573  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:57.740607  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:58.232426  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:58.240147  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:58.240177  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:58.732839  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:58.740675  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:58.740707  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:59.233276  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:59.240896  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:59.240925  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:11:59.733221  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:11:59.750186  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:11:59.750222  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:00.232928  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:00.275699  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:00.275730  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:00.733261  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:00.741656  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:00.741692  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:01.233301  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:01.241463  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:01.241557  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	[... identical healthz output elided: between 19:12:01.732 and 19:12:09.740, minikube re-checked https://192.168.49.2:8443/healthz 17 more times at ~500ms intervals; every poll returned the same HTTP 500 response, with the single failing check "[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld" and all other checks ok ...]
	I0919 19:12:10.232370  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:10.240151  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:10.240183  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:10.732360  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:10.740055  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:10.740086  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:11.233291  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:11.240993  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:11.241022  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:11.733270  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:11.741311  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:11.741349  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:12.232944  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:12.240733  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:12.240766  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:12.733333  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:12.741234  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:12.741281  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:13.233011  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:13.240793  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:13.240827  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:13.733182  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:13.775651  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:13.775687  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:14.233229  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:14.241670  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:14.241704  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:14.733290  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:14.744187  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:14.744225  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:15.232962  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:15.242579  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:15.242698  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:15.733284  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:15.741706  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:15.741731  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:16.233265  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:16.241114  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:16.241150  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:16.732425  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:16.740329  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:16.740359  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:17.233037  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:17.242358  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:17.242403  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 19:12:17.732714  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:17.742047  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 19:12:17.742129  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[same healthz body as above elided: all checks ok except [-]poststarthook/start-service-ip-repair-controllers; healthz check failed]
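
The 500s above are the apiserver's /healthz endpoint aggregating its post-start hooks: a single failing hook (here start-service-ip-repair-controllers, whose detailed reason the apiserver withholds from this probe) fails the whole endpoint, and minikube simply re-polls on a roughly 500ms cadence until it turns 200. A minimal Go sketch of that polling pattern (illustrative only, not minikube's actual implementation; the URL, timeout, and insecure TLS config are assumptions made for brevity — minikube itself authenticates with the cluster's certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns 200 or timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Assumption for brevity: skip certificate verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // apiserver reports "ok"
                }
                // 500 bodies like the ones above list each failing check.
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        return fmt.Errorf("healthz did not become ready within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.49.2:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
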
	I0919 19:12:18.232930  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 19:12:18.233029  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 19:12:18.284995  353502 cri.go:89] found id: "6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9"
	I0919 19:12:18.285024  353502 cri.go:89] found id: "8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38"
	I0919 19:12:18.285029  353502 cri.go:89] found id: ""
	I0919 19:12:18.285037  353502 logs.go:276] 2 containers: [6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9 8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38]
	I0919 19:12:18.285094  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:18.289655  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:18.294161  353502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 19:12:18.294237  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 19:12:18.333219  353502 cri.go:89] found id: "c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63"
	I0919 19:12:18.333243  353502 cri.go:89] found id: "001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2"
	I0919 19:12:18.333248  353502 cri.go:89] found id: ""
	I0919 19:12:18.333256  353502 logs.go:276] 2 containers: [c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63 001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2]
	I0919 19:12:18.333317  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:18.336971  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:18.340512  353502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 19:12:18.340587  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 19:12:18.382214  353502 cri.go:89] found id: ""
	I0919 19:12:18.382234  353502 logs.go:276] 0 containers: []
	W0919 19:12:18.382243  353502 logs.go:278] No container was found matching "coredns"
	I0919 19:12:18.382249  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 19:12:18.382310  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 19:12:18.424266  353502 cri.go:89] found id: "771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d"
	I0919 19:12:18.424292  353502 cri.go:89] found id: "4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6"
	I0919 19:12:18.424298  353502 cri.go:89] found id: ""
	I0919 19:12:18.424306  353502 logs.go:276] 2 containers: [771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d 4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6]
	I0919 19:12:18.424363  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:18.428193  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:18.431631  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 19:12:18.431724  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 19:12:18.469239  353502 cri.go:89] found id: "4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f"
	I0919 19:12:18.469265  353502 cri.go:89] found id: ""
	I0919 19:12:18.469273  353502 logs.go:276] 1 containers: [4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f]
	I0919 19:12:18.469329  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:18.474065  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 19:12:18.474170  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 19:12:18.519046  353502 cri.go:89] found id: "0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99"
	I0919 19:12:18.519069  353502 cri.go:89] found id: "828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762"
	I0919 19:12:18.519075  353502 cri.go:89] found id: ""
	I0919 19:12:18.519083  353502 logs.go:276] 2 containers: [0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99 828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762]
	I0919 19:12:18.519138  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:18.522819  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:18.526196  353502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 19:12:18.526270  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 19:12:18.565677  353502 cri.go:89] found id: "079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9"
	I0919 19:12:18.565702  353502 cri.go:89] found id: ""
	I0919 19:12:18.565710  353502 logs.go:276] 1 containers: [079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9]
	I0919 19:12:18.565775  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:18.569392  353502 logs.go:123] Gathering logs for kube-controller-manager [0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99] ...
	I0919 19:12:18.569419  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99"
	I0919 19:12:18.629441  353502 logs.go:123] Gathering logs for etcd [c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63] ...
	I0919 19:12:18.629497  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63"
	I0919 19:12:18.679429  353502 logs.go:123] Gathering logs for kube-scheduler [771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d] ...
	I0919 19:12:18.679463  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d"
	I0919 19:12:18.753557  353502 logs.go:123] Gathering logs for kube-scheduler [4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6] ...
	I0919 19:12:18.753594  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6"
	I0919 19:12:18.795902  353502 logs.go:123] Gathering logs for CRI-O ...
	I0919 19:12:18.795946  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 19:12:18.872286  353502 logs.go:123] Gathering logs for container status ...
	I0919 19:12:18.872334  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 19:12:18.945144  353502 logs.go:123] Gathering logs for kubelet ...
	I0919 19:12:18.945238  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 19:12:19.016598  353502 logs.go:123] Gathering logs for describe nodes ...
	I0919 19:12:19.016677  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 19:12:19.330169  353502 logs.go:123] Gathering logs for kube-proxy [4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f] ...
	I0919 19:12:19.330211  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f"
	I0919 19:12:19.373711  353502 logs.go:123] Gathering logs for kube-controller-manager [828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762] ...
	I0919 19:12:19.373743  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762"
	I0919 19:12:19.416500  353502 logs.go:123] Gathering logs for dmesg ...
	I0919 19:12:19.416529  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 19:12:19.434473  353502 logs.go:123] Gathering logs for kube-apiserver [6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9] ...
	I0919 19:12:19.434501  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9"
	I0919 19:12:19.492527  353502 logs.go:123] Gathering logs for kube-apiserver [8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38] ...
	I0919 19:12:19.492561  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38"
	I0919 19:12:19.540427  353502 logs.go:123] Gathering logs for etcd [001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2] ...
	I0919 19:12:19.540461  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2"
	I0919 19:12:19.599804  353502 logs.go:123] Gathering logs for kindnet [079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9] ...
	I0919 19:12:19.599841  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9"
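
Each log-gathering pass above follows the same two-step pattern: discover container IDs with crictl ps -a --quiet --name=<component>, then dump each one with crictl logs --tail 400 <id>. A condensed Go sketch of that loop (a hypothetical helper, assuming crictl on the PATH and root via sudo, as the log's own commands do):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // gather lists all containers (running or not) for one component and
    // tails the last 400 log lines of each.
    func gather(name string) error {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            // Mirrors the log's "No container was found matching" warning.
            fmt.Printf("no container found matching %q\n", name)
        }
        for _, id := range ids {
            logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                return err
            }
            fmt.Printf("==> %s [%s]\n%s\n", name, id, logs)
        }
        return nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet"} {
            if err := gather(c); err != nil {
                fmt.Println("gather", c, "error:", err)
            }
        }
    }
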
	I0919 19:12:22.142612  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:23.091195  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0919 19:12:23.091274  353502 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0919 19:12:23.091384  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 19:12:23.091480  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 19:12:23.181743  353502 cri.go:89] found id: "6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9"
	I0919 19:12:23.181767  353502 cri.go:89] found id: "8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38"
	I0919 19:12:23.181786  353502 cri.go:89] found id: ""
	I0919 19:12:23.181795  353502 logs.go:276] 2 containers: [6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9 8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38]
	I0919 19:12:23.181851  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:23.186131  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:23.191743  353502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 19:12:23.191814  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 19:12:23.248867  353502 cri.go:89] found id: "c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63"
	I0919 19:12:23.248940  353502 cri.go:89] found id: "001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2"
	I0919 19:12:23.248959  353502 cri.go:89] found id: ""
	I0919 19:12:23.248981  353502 logs.go:276] 2 containers: [c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63 001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2]
	I0919 19:12:23.249071  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:23.252578  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:23.258155  353502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 19:12:23.258280  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 19:12:23.320472  353502 cri.go:89] found id: ""
	I0919 19:12:23.320555  353502 logs.go:276] 0 containers: []
	W0919 19:12:23.320574  353502 logs.go:278] No container was found matching "coredns"
	I0919 19:12:23.320582  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 19:12:23.320658  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 19:12:23.372855  353502 cri.go:89] found id: "771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d"
	I0919 19:12:23.372925  353502 cri.go:89] found id: "4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6"
	I0919 19:12:23.372937  353502 cri.go:89] found id: ""
	I0919 19:12:23.372945  353502 logs.go:276] 2 containers: [771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d 4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6]
	I0919 19:12:23.373007  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:23.376814  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:23.381454  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 19:12:23.381529  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 19:12:23.418857  353502 cri.go:89] found id: "4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f"
	I0919 19:12:23.418880  353502 cri.go:89] found id: ""
	I0919 19:12:23.418889  353502 logs.go:276] 1 containers: [4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f]
	I0919 19:12:23.419012  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:23.423294  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 19:12:23.423395  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 19:12:23.460965  353502 cri.go:89] found id: "0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99"
	I0919 19:12:23.460986  353502 cri.go:89] found id: "828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762"
	I0919 19:12:23.460991  353502 cri.go:89] found id: ""
	I0919 19:12:23.460999  353502 logs.go:276] 2 containers: [0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99 828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762]
	I0919 19:12:23.461061  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:23.464671  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:23.468158  353502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 19:12:23.468241  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 19:12:23.506509  353502 cri.go:89] found id: "079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9"
	I0919 19:12:23.506531  353502 cri.go:89] found id: ""
	I0919 19:12:23.506540  353502 logs.go:276] 1 containers: [079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9]
	I0919 19:12:23.506600  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:23.510263  353502 logs.go:123] Gathering logs for describe nodes ...
	I0919 19:12:23.510291  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 19:12:23.753946  353502 logs.go:123] Gathering logs for kube-scheduler [771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d] ...
	I0919 19:12:23.753983  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d"
	I0919 19:12:23.835901  353502 logs.go:123] Gathering logs for kube-scheduler [4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6] ...
	I0919 19:12:23.835942  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6"
	I0919 19:12:23.887886  353502 logs.go:123] Gathering logs for CRI-O ...
	I0919 19:12:23.887973  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 19:12:23.978704  353502 logs.go:123] Gathering logs for kube-controller-manager [0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99] ...
	I0919 19:12:23.978739  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99"
	I0919 19:12:24.080890  353502 logs.go:123] Gathering logs for container status ...
	I0919 19:12:24.080932  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 19:12:24.185749  353502 logs.go:123] Gathering logs for kubelet ...
	I0919 19:12:24.185849  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 19:12:24.301573  353502 logs.go:123] Gathering logs for dmesg ...
	I0919 19:12:24.301603  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 19:12:24.319923  353502 logs.go:123] Gathering logs for etcd [c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63] ...
	I0919 19:12:24.319999  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63"
	I0919 19:12:24.393777  353502 logs.go:123] Gathering logs for etcd [001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2] ...
	I0919 19:12:24.393857  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2"
	I0919 19:12:24.459122  353502 logs.go:123] Gathering logs for kube-proxy [4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f] ...
	I0919 19:12:24.459156  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f"
	I0919 19:12:24.516609  353502 logs.go:123] Gathering logs for kube-apiserver [6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9] ...
	I0919 19:12:24.516638  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9"
	I0919 19:12:24.578242  353502 logs.go:123] Gathering logs for kube-apiserver [8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38] ...
	I0919 19:12:24.578279  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38"
	I0919 19:12:24.621739  353502 logs.go:123] Gathering logs for kube-controller-manager [828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762] ...
	I0919 19:12:24.621768  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762"
	I0919 19:12:24.666831  353502 logs.go:123] Gathering logs for kindnet [079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9] ...
	I0919 19:12:24.666863  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9"
	I0919 19:12:27.214826  353502 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 19:12:27.224805  353502 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 19:12:27.224879  353502 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0919 19:12:27.224896  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:27.224906  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:27.224919  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:27.239324  353502 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0919 19:12:27.239484  353502 api_server.go:141] control plane version: v1.31.1
	I0919 19:12:27.239512  353502 api_server.go:131] duration metric: took 40.507286035s to wait for apiserver health ...
	I0919 19:12:27.239522  353502 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 19:12:27.239546  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 19:12:27.239621  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 19:12:27.302415  353502 cri.go:89] found id: "6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9"
	I0919 19:12:27.302438  353502 cri.go:89] found id: "8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38"
	I0919 19:12:27.302442  353502 cri.go:89] found id: ""
	I0919 19:12:27.302451  353502 logs.go:276] 2 containers: [6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9 8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38]
	I0919 19:12:27.302520  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:27.306842  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:27.311131  353502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 19:12:27.311237  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 19:12:27.349245  353502 cri.go:89] found id: "c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63"
	I0919 19:12:27.349271  353502 cri.go:89] found id: "001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2"
	I0919 19:12:27.349276  353502 cri.go:89] found id: ""
	I0919 19:12:27.349284  353502 logs.go:276] 2 containers: [c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63 001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2]
	I0919 19:12:27.349341  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:27.353108  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:27.356571  353502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 19:12:27.356643  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 19:12:27.398518  353502 cri.go:89] found id: ""
	I0919 19:12:27.398614  353502 logs.go:276] 0 containers: []
	W0919 19:12:27.398639  353502 logs.go:278] No container was found matching "coredns"
	I0919 19:12:27.398675  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 19:12:27.398789  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 19:12:27.437504  353502 cri.go:89] found id: "771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d"
	I0919 19:12:27.437571  353502 cri.go:89] found id: "4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6"
	I0919 19:12:27.437592  353502 cri.go:89] found id: ""
	I0919 19:12:27.437635  353502 logs.go:276] 2 containers: [771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d 4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6]
	I0919 19:12:27.437716  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:27.441451  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:27.445030  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 19:12:27.445104  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 19:12:27.486373  353502 cri.go:89] found id: "4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f"
	I0919 19:12:27.486395  353502 cri.go:89] found id: ""
	I0919 19:12:27.486404  353502 logs.go:276] 1 containers: [4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f]
	I0919 19:12:27.486485  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:27.490011  353502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 19:12:27.490089  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 19:12:27.532909  353502 cri.go:89] found id: "0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99"
	I0919 19:12:27.532944  353502 cri.go:89] found id: "828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762"
	I0919 19:12:27.532957  353502 cri.go:89] found id: ""
	I0919 19:12:27.532964  353502 logs.go:276] 2 containers: [0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99 828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762]
	I0919 19:12:27.533034  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:27.536841  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:27.540356  353502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 19:12:27.540482  353502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 19:12:27.582886  353502 cri.go:89] found id: "079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9"
	I0919 19:12:27.582910  353502 cri.go:89] found id: ""
	I0919 19:12:27.582917  353502 logs.go:276] 1 containers: [079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9]
	I0919 19:12:27.582974  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:27.586881  353502 logs.go:123] Gathering logs for kube-apiserver [8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38] ...
	I0919 19:12:27.586951  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c7ba52f0a1bf3188237e09b7e1fdb3d0d81c5fa3bb40bde0f2470807cd98f38"
	I0919 19:12:27.637784  353502 logs.go:123] Gathering logs for etcd [c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63] ...
	I0919 19:12:27.637814  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0f682becd7aa45a29989f5a73314f2646b85b6d01a10f7dfa1034a931f34b63"
	I0919 19:12:27.698883  353502 logs.go:123] Gathering logs for etcd [001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2] ...
	I0919 19:12:27.698918  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 001d6a43f36989685a16d98ca20517aee27e4c6aba6402834c627bd88ca93ae2"
	I0919 19:12:27.765914  353502 logs.go:123] Gathering logs for kube-scheduler [771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d] ...
	I0919 19:12:27.765954  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 771c84c22a8b4052881e43fc27cc5d1cf50f9f34609e894ba9f5455659307e8d"
	I0919 19:12:27.829217  353502 logs.go:123] Gathering logs for kube-controller-manager [0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99] ...
	I0919 19:12:27.829252  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947e5fb00709aecf601093126d5a8ad7182483da8f0271a2a915cf0ba641c99"
	I0919 19:12:27.914854  353502 logs.go:123] Gathering logs for kube-controller-manager [828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762] ...
	I0919 19:12:27.914889  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 828f6f3f7c34dd9b400a0c447b370eb39ba92e202ff346269dfaf1e8ead92762"
	I0919 19:12:27.965688  353502 logs.go:123] Gathering logs for kubelet ...
	I0919 19:12:27.965769  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 19:12:28.037706  353502 logs.go:123] Gathering logs for kube-apiserver [6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9] ...
	I0919 19:12:28.037746  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f5de2b1f87b6d3fb69f8c70c6099516935ad5f09f89678bceb7c5025c5fb7d9"
	I0919 19:12:28.089849  353502 logs.go:123] Gathering logs for container status ...
	I0919 19:12:28.089882  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 19:12:28.133396  353502 logs.go:123] Gathering logs for describe nodes ...
	I0919 19:12:28.133429  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 19:12:28.397864  353502 logs.go:123] Gathering logs for kindnet [079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9] ...
	I0919 19:12:28.397900  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079dc20df5657ff85679c9d0bdc4798bc707275355707c038c89e946da5456a9"
	I0919 19:12:28.446958  353502 logs.go:123] Gathering logs for kube-proxy [4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f] ...
	I0919 19:12:28.446989  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4979771502be7375369db6e3f161fd81080096935c2b7bef59456096e9bae05f"
	I0919 19:12:28.495849  353502 logs.go:123] Gathering logs for CRI-O ...
	I0919 19:12:28.495877  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 19:12:28.565461  353502 logs.go:123] Gathering logs for dmesg ...
	I0919 19:12:28.565499  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 19:12:28.582859  353502 logs.go:123] Gathering logs for kube-scheduler [4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6] ...
	I0919 19:12:28.582889  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ab0e6d5a67fab9eee35edb45c61eef329dd3df7567d779bc74525e20d210be6"
	I0919 19:12:31.124964  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0919 19:12:31.124989  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:31.125000  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:31.125004  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:31.133467  353502 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0919 19:12:31.143330  353502 system_pods.go:59] 26 kube-system pods found
	I0919 19:12:31.143382  353502 system_pods.go:61] "coredns-7c65d6cfc9-drds6" [2b987fab-d3db-4cb5-a108-94e5748d7155] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 19:12:31.143392  353502 system_pods.go:61] "coredns-7c65d6cfc9-rcmrq" [993b1524-6380-4c53-bfe8-ce96a07f61b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 19:12:31.143399  353502 system_pods.go:61] "etcd-ha-310211" [1c5dad04-8f29-435c-b863-6e439bc571d0] Running
	I0919 19:12:31.143404  353502 system_pods.go:61] "etcd-ha-310211-m02" [5089869e-2dcb-499b-b594-b1f909667e9c] Running
	I0919 19:12:31.143409  353502 system_pods.go:61] "etcd-ha-310211-m03" [51baad43-04de-497b-b6a7-012a9b2abf03] Running
	I0919 19:12:31.143414  353502 system_pods.go:61] "kindnet-b57tk" [859b7cff-cdfa-4f17-af48-03083166ca8f] Running
	I0919 19:12:31.143418  353502 system_pods.go:61] "kindnet-f97kj" [12fb54d7-3c66-4aec-b629-5c1ba1db41bd] Running
	I0919 19:12:31.143423  353502 system_pods.go:61] "kindnet-g4zg9" [f79cc9fc-590f-464f-aac1-5e44f4358596] Running
	I0919 19:12:31.143427  353502 system_pods.go:61] "kindnet-vhvq2" [2f51dc06-5e2d-4a34-ab88-c55b5fcea1c4] Running
	I0919 19:12:31.143433  353502 system_pods.go:61] "kube-apiserver-ha-310211" [c51c3488-bc7a-43bd-8294-07037340e044] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 19:12:31.143442  353502 system_pods.go:61] "kube-apiserver-ha-310211-m02" [aece3230-105d-49b8-aa76-d64f2dc051f3] Running
	I0919 19:12:31.143447  353502 system_pods.go:61] "kube-apiserver-ha-310211-m03" [103d9f6c-6c16-4a0d-886b-51d0888ab60f] Running
	I0919 19:12:31.143453  353502 system_pods.go:61] "kube-controller-manager-ha-310211" [0ab8b484-5569-401b-b8f0-14b26ff2161f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 19:12:31.143463  353502 system_pods.go:61] "kube-controller-manager-ha-310211-m02" [74cc9912-68c6-4699-8779-4047c54a9e96] Running
	I0919 19:12:31.143468  353502 system_pods.go:61] "kube-controller-manager-ha-310211-m03" [12220c38-1c8a-4970-acce-8b4513f8a47a] Running
	I0919 19:12:31.143476  353502 system_pods.go:61] "kube-proxy-9jg6c" [001dc3f6-3ef5-425b-a0c3-9ff6b8d3aeff] Running
	I0919 19:12:31.143486  353502 system_pods.go:61] "kube-proxy-f2xdc" [2c7ec6d2-ee1a-43bd-9db9-a1a9741deaae] Running
	I0919 19:12:31.143490  353502 system_pods.go:61] "kube-proxy-lbfq4" [d612bab1-080f-4146-ba26-9c5aaa602a98] Running
	I0919 19:12:31.143494  353502 system_pods.go:61] "kube-proxy-vsrc4" [16d78326-3695-4224-a000-bb79087903ce] Running
	I0919 19:12:31.143498  353502 system_pods.go:61] "kube-scheduler-ha-310211" [95e09d69-6245-43f1-9738-37cd7e439fb7] Running
	I0919 19:12:31.143502  353502 system_pods.go:61] "kube-scheduler-ha-310211-m02" [cb90e3ae-283d-442a-9d1f-f8582d2872ec] Running
	I0919 19:12:31.143505  353502 system_pods.go:61] "kube-scheduler-ha-310211-m03" [bb3d3bcd-5b74-4bed-a119-4fc0b683d45d] Running
	I0919 19:12:31.143509  353502 system_pods.go:61] "kube-vip-ha-310211" [c2d094a2-aa9c-4a71-9cfb-3dcc17812d7a] Running
	I0919 19:12:31.143513  353502 system_pods.go:61] "kube-vip-ha-310211-m02" [4a59a1a3-a4f3-41a2-ac7e-6bdc9b875b70] Running
	I0919 19:12:31.143521  353502 system_pods.go:61] "kube-vip-ha-310211-m03" [2a2f004f-f4f4-4fa6-be23-db7f7b8a0109] Running
	I0919 19:12:31.143524  353502 system_pods.go:61] "storage-provisioner" [0658ad3b-68f1-4a20-b125-5d4f759ab3e4] Running
	I0919 19:12:31.143534  353502 system_pods.go:74] duration metric: took 3.904005126s to wait for pod list to return data ...
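
The pod inventory above is an ordinary GET on /api/v1/namespaces/kube-system/pods, with minikube's round-tripper logging the request headers and response status. The equivalent query via client-go, sketched for reference (minikube issues these GETs through its own machinery; the kubeconfig path here is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path; any kubeconfig pointing at the cluster works.
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Prints name and phase for each pod, like the system_pods lines above.
        for _, p := range pods.Items {
            fmt.Printf("%s %s\n", p.Name, p.Status.Phase)
        }
    }
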
	I0919 19:12:31.143549  353502 default_sa.go:34] waiting for default service account to be created ...
	I0919 19:12:31.143716  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0919 19:12:31.143744  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:31.143753  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:31.143758  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:31.147537  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:31.147836  353502 default_sa.go:45] found service account: "default"
	I0919 19:12:31.147855  353502 default_sa.go:55] duration metric: took 4.298788ms for default service account to be created ...
	I0919 19:12:31.147866  353502 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 19:12:31.147965  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0919 19:12:31.147978  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:31.147987  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:31.148005  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:31.153475  353502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:12:31.163302  353502 system_pods.go:86] 26 kube-system pods found
	I0919 19:12:31.163350  353502 system_pods.go:89] "coredns-7c65d6cfc9-drds6" [2b987fab-d3db-4cb5-a108-94e5748d7155] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 19:12:31.163362  353502 system_pods.go:89] "coredns-7c65d6cfc9-rcmrq" [993b1524-6380-4c53-bfe8-ce96a07f61b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 19:12:31.163398  353502 system_pods.go:89] "etcd-ha-310211" [1c5dad04-8f29-435c-b863-6e439bc571d0] Running
	I0919 19:12:31.163412  353502 system_pods.go:89] "etcd-ha-310211-m02" [5089869e-2dcb-499b-b594-b1f909667e9c] Running
	I0919 19:12:31.163423  353502 system_pods.go:89] "etcd-ha-310211-m03" [51baad43-04de-497b-b6a7-012a9b2abf03] Running
	I0919 19:12:31.163428  353502 system_pods.go:89] "kindnet-b57tk" [859b7cff-cdfa-4f17-af48-03083166ca8f] Running
	I0919 19:12:31.163434  353502 system_pods.go:89] "kindnet-f97kj" [12fb54d7-3c66-4aec-b629-5c1ba1db41bd] Running
	I0919 19:12:31.163441  353502 system_pods.go:89] "kindnet-g4zg9" [f79cc9fc-590f-464f-aac1-5e44f4358596] Running
	I0919 19:12:31.163446  353502 system_pods.go:89] "kindnet-vhvq2" [2f51dc06-5e2d-4a34-ab88-c55b5fcea1c4] Running
	I0919 19:12:31.163472  353502 system_pods.go:89] "kube-apiserver-ha-310211" [c51c3488-bc7a-43bd-8294-07037340e044] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 19:12:31.163484  353502 system_pods.go:89] "kube-apiserver-ha-310211-m02" [aece3230-105d-49b8-aa76-d64f2dc051f3] Running
	I0919 19:12:31.163491  353502 system_pods.go:89] "kube-apiserver-ha-310211-m03" [103d9f6c-6c16-4a0d-886b-51d0888ab60f] Running
	I0919 19:12:31.163509  353502 system_pods.go:89] "kube-controller-manager-ha-310211" [0ab8b484-5569-401b-b8f0-14b26ff2161f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 19:12:31.163521  353502 system_pods.go:89] "kube-controller-manager-ha-310211-m02" [74cc9912-68c6-4699-8779-4047c54a9e96] Running
	I0919 19:12:31.163527  353502 system_pods.go:89] "kube-controller-manager-ha-310211-m03" [12220c38-1c8a-4970-acce-8b4513f8a47a] Running
	I0919 19:12:31.163534  353502 system_pods.go:89] "kube-proxy-9jg6c" [001dc3f6-3ef5-425b-a0c3-9ff6b8d3aeff] Running
	I0919 19:12:31.163539  353502 system_pods.go:89] "kube-proxy-f2xdc" [2c7ec6d2-ee1a-43bd-9db9-a1a9741deaae] Running
	I0919 19:12:31.163546  353502 system_pods.go:89] "kube-proxy-lbfq4" [d612bab1-080f-4146-ba26-9c5aaa602a98] Running
	I0919 19:12:31.163550  353502 system_pods.go:89] "kube-proxy-vsrc4" [16d78326-3695-4224-a000-bb79087903ce] Running
	I0919 19:12:31.163554  353502 system_pods.go:89] "kube-scheduler-ha-310211" [95e09d69-6245-43f1-9738-37cd7e439fb7] Running
	I0919 19:12:31.163558  353502 system_pods.go:89] "kube-scheduler-ha-310211-m02" [cb90e3ae-283d-442a-9d1f-f8582d2872ec] Running
	I0919 19:12:31.163565  353502 system_pods.go:89] "kube-scheduler-ha-310211-m03" [bb3d3bcd-5b74-4bed-a119-4fc0b683d45d] Running
	I0919 19:12:31.163574  353502 system_pods.go:89] "kube-vip-ha-310211" [c2d094a2-aa9c-4a71-9cfb-3dcc17812d7a] Running
	I0919 19:12:31.163579  353502 system_pods.go:89] "kube-vip-ha-310211-m02" [4a59a1a3-a4f3-41a2-ac7e-6bdc9b875b70] Running
	I0919 19:12:31.163583  353502 system_pods.go:89] "kube-vip-ha-310211-m03" [2a2f004f-f4f4-4fa6-be23-db7f7b8a0109] Running
	I0919 19:12:31.163587  353502 system_pods.go:89] "storage-provisioner" [0658ad3b-68f1-4a20-b125-5d4f759ab3e4] Running
	I0919 19:12:31.163594  353502 system_pods.go:126] duration metric: took 15.722455ms to wait for k8s-apps to be running ...
	I0919 19:12:31.163614  353502 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 19:12:31.163679  353502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:12:31.175870  353502 system_svc.go:56] duration metric: took 12.254133ms WaitForService to wait for kubelet
	I0919 19:12:31.175919  353502 kubeadm.go:582] duration metric: took 1m13.440978629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:12:31.175945  353502 node_conditions.go:102] verifying NodePressure condition ...
	I0919 19:12:31.176028  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0919 19:12:31.176042  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:31.176051  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:31.176061  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:31.179416  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:31.180768  353502 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 19:12:31.180805  353502 node_conditions.go:123] node cpu capacity is 2
	I0919 19:12:31.180819  353502 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 19:12:31.180824  353502 node_conditions.go:123] node cpu capacity is 2
	I0919 19:12:31.180828  353502 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 19:12:31.180832  353502 node_conditions.go:123] node cpu capacity is 2
	I0919 19:12:31.180837  353502 node_conditions.go:105] duration metric: took 4.885801ms to run NodePressure ...
	I0919 19:12:31.180850  353502 start.go:241] waiting for startup goroutines ...
	I0919 19:12:31.180874  353502 start.go:255] writing updated cluster config ...
	I0919 19:12:31.183353  353502 out.go:201] 
	I0919 19:12:31.185483  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:12:31.185621  353502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/config.json ...
	I0919 19:12:31.187953  353502 out.go:177] * Starting "ha-310211-m04" worker node in "ha-310211" cluster
	I0919 19:12:31.190300  353502 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 19:12:31.192072  353502 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 19:12:31.194254  353502 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 19:12:31.194241  353502 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 19:12:31.194364  353502 cache.go:56] Caching tarball of preloaded images
	I0919 19:12:31.194450  353502 preload.go:172] Found /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0919 19:12:31.194461  353502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 19:12:31.194595  353502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/config.json ...
	I0919 19:12:31.213345  353502 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon, skipping pull
	I0919 19:12:31.213371  353502 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in daemon, skipping load
	I0919 19:12:31.213386  353502 cache.go:194] Successfully downloaded all kic artifacts
	I0919 19:12:31.213410  353502 start.go:360] acquireMachinesLock for ha-310211-m04: {Name:mk097d43af38277ff2912e24f8a58f95f2963ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 19:12:31.213471  353502 start.go:364] duration metric: took 38.121µs to acquireMachinesLock for "ha-310211-m04"
	I0919 19:12:31.213497  353502 start.go:96] Skipping create...Using existing machine configuration
	I0919 19:12:31.213502  353502 fix.go:54] fixHost starting: m04
	I0919 19:12:31.213780  353502 cli_runner.go:164] Run: docker container inspect ha-310211-m04 --format={{.State.Status}}
	I0919 19:12:31.228545  353502 fix.go:112] recreateIfNeeded on ha-310211-m04: state=Stopped err=<nil>
	W0919 19:12:31.228574  353502 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 19:12:31.233034  353502 out.go:177] * Restarting existing docker container for "ha-310211-m04" ...
	I0919 19:12:31.235619  353502 cli_runner.go:164] Run: docker start ha-310211-m04
	I0919 19:12:31.549765  353502 cli_runner.go:164] Run: docker container inspect ha-310211-m04 --format={{.State.Status}}
	I0919 19:12:31.575610  353502 kic.go:430] container "ha-310211-m04" state is running.
	I0919 19:12:31.577433  353502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211-m04
	I0919 19:12:31.608241  353502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/config.json ...
	I0919 19:12:31.608511  353502 machine.go:93] provisionDockerMachine start ...
	I0919 19:12:31.608579  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m04
	I0919 19:12:31.645057  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:12:31.645307  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0919 19:12:31.645322  353502 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 19:12:31.645910  353502 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0919 19:12:34.811574  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-310211-m04
	
	I0919 19:12:34.811601  353502 ubuntu.go:169] provisioning hostname "ha-310211-m04"
	I0919 19:12:34.811666  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m04
	I0919 19:12:34.829601  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:12:34.829857  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0919 19:12:34.829871  353502 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-310211-m04 && echo "ha-310211-m04" | sudo tee /etc/hostname
	I0919 19:12:34.988257  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-310211-m04
	
	I0919 19:12:34.988351  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m04
	I0919 19:12:35.015310  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:12:35.015557  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0919 19:12:35.015576  353502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-310211-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-310211-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-310211-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 19:12:35.164178  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 19:12:35.164204  353502 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-287261/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-287261/.minikube}
	I0919 19:12:35.164220  353502 ubuntu.go:177] setting up certificates
	I0919 19:12:35.164229  353502 provision.go:84] configureAuth start
	I0919 19:12:35.164293  353502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211-m04
	I0919 19:12:35.196267  353502 provision.go:143] copyHostCerts
	I0919 19:12:35.196332  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem
	I0919 19:12:35.196372  353502 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem, removing ...
	I0919 19:12:35.196386  353502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem
	I0919 19:12:35.196462  353502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/ca.pem (1082 bytes)
	I0919 19:12:35.196570  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem
	I0919 19:12:35.196592  353502 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem, removing ...
	I0919 19:12:35.196598  353502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem
	I0919 19:12:35.196632  353502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/cert.pem (1123 bytes)
	I0919 19:12:35.196707  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem
	I0919 19:12:35.196728  353502 exec_runner.go:144] found /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem, removing ...
	I0919 19:12:35.196737  353502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem
	I0919 19:12:35.196765  353502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-287261/.minikube/key.pem (1675 bytes)
	I0919 19:12:35.196827  353502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem org=jenkins.ha-310211-m04 san=[127.0.0.1 192.168.49.5 ha-310211-m04 localhost minikube]
	I0919 19:12:35.987013  353502 provision.go:177] copyRemoteCerts
	I0919 19:12:35.987089  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 19:12:35.987132  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m04
	I0919 19:12:36.022047  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m04/id_rsa Username:docker}
	I0919 19:12:36.137411  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 19:12:36.137503  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 19:12:36.168471  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 19:12:36.168539  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 19:12:36.197392  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 19:12:36.197457  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 19:12:36.234582  353502 provision.go:87] duration metric: took 1.070339513s to configureAuth
	I0919 19:12:36.234612  353502 ubuntu.go:193] setting minikube options for container-runtime
	I0919 19:12:36.234861  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:12:36.234974  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m04
	I0919 19:12:36.253423  353502 main.go:141] libmachine: Using SSH client type: native
	I0919 19:12:36.253665  353502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33203 <nil> <nil>}
	I0919 19:12:36.253686  353502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 19:12:36.554802  353502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 19:12:36.554839  353502 machine.go:96] duration metric: took 4.946310062s to provisionDockerMachine
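
For context on the step just completed: the SSH command writes CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarts CRI-O so the flag takes effect. 10.96.0.0/12 is kubeadm's default service CIDR, so marking it as an insecure registry lets this node pull over plain HTTP from registries exposed as in-cluster services (such as minikube's registry addon).
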
	I0919 19:12:36.554852  353502 start.go:293] postStartSetup for "ha-310211-m04" (driver="docker")
	I0919 19:12:36.554863  353502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 19:12:36.554925  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 19:12:36.554973  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m04
	I0919 19:12:36.578351  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m04/id_rsa Username:docker}
	I0919 19:12:36.685767  353502 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 19:12:36.689475  353502 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 19:12:36.689509  353502 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 19:12:36.689519  353502 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 19:12:36.689526  353502 info.go:137] Remote host: Ubuntu 22.04.5 LTS
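(Editor's note: the "Couldn't set key ..., no corresponding struct field found" warnings above are benign — they come from mapping /etc/os-release key=value pairs onto a struct that only declares a subset of the keys, so VERSION_CODENAME, PRIVACY_POLICY_URL, and UBUNTU_CODENAME have nowhere to land. A minimal sketch of that kind of parser; the field set here is chosen for illustration:)

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	var id, version string
    	// Only the keys we have fields for; anything else triggers a warning.
    	known := map[string]*string{"ID": &id, "VERSION_ID": &version}

    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		k, v, ok := strings.Cut(sc.Text(), "=")
    		if !ok || k == "" {
    			continue
    		}
    		if dst, found := known[k]; found {
    			*dst = strings.Trim(v, `"`)
    		} else {
    			fmt.Printf("Couldn't set key %s, no corresponding struct field found\n", k)
    		}
    	}
    	fmt.Printf("Remote host: %s %s\n", id, version)
    }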
	I0919 19:12:36.689536  353502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/addons for local assets ...
	I0919 19:12:36.689602  353502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-287261/.minikube/files for local assets ...
	I0919 19:12:36.689679  353502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem -> 2926662.pem in /etc/ssl/certs
	I0919 19:12:36.689686  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem -> /etc/ssl/certs/2926662.pem
	I0919 19:12:36.689790  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 19:12:36.698494  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem --> /etc/ssl/certs/2926662.pem (1708 bytes)
	I0919 19:12:36.727463  353502 start.go:296] duration metric: took 172.593672ms for postStartSetup
	I0919 19:12:36.727555  353502 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:12:36.727606  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m04
	I0919 19:12:36.747086  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m04/id_rsa Username:docker}
	I0919 19:12:36.845170  353502 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 19:12:36.850653  353502 fix.go:56] duration metric: took 5.637142179s for fixHost
	I0919 19:12:36.850679  353502 start.go:83] releasing machines lock for "ha-310211-m04", held for 5.637193928s
	I0919 19:12:36.850748  353502 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211-m04
	I0919 19:12:36.869659  353502 out.go:177] * Found network options:
	I0919 19:12:36.871793  353502 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 19:12:36.873815  353502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 19:12:36.873851  353502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 19:12:36.873876  353502 proxy.go:119] fail to check proxy env: Error ip not in block
	W0919 19:12:36.873889  353502 proxy.go:119] fail to check proxy env: Error ip not in block
	I0919 19:12:36.873970  353502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 19:12:36.874016  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m04
	I0919 19:12:36.874305  353502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 19:12:36.874369  353502 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m04
	I0919 19:12:36.903128  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m04/id_rsa Username:docker}
	I0919 19:12:36.920582  353502 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33203 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m04/id_rsa Username:docker}
	I0919 19:12:37.190456  353502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 19:12:37.195047  353502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:12:37.210516  353502 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 19:12:37.210598  353502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 19:12:37.223724  353502 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
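(Editor's note: the loopback/bridge CNI configs above are disabled by renaming matching files to *.mk_disabled so CRI-O stops loading them — nothing is deleted, which keeps the step reversible. The same idea sketched in Go; the glob patterns mirror the find expressions in the log:)

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	for _, pattern := range []string{
    		"/etc/cni/net.d/*loopback.conf*",
    		"/etc/cni/net.d/*bridge*",
    		"/etc/cni/net.d/*podman*",
    	} {
    		matches, _ := filepath.Glob(pattern)
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled on a previous run
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Fprintln(os.Stderr, err)
    			}
    		}
    	}
    }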
	I0919 19:12:37.223750  353502 start.go:495] detecting cgroup driver to use...
	I0919 19:12:37.223782  353502 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 19:12:37.223830  353502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 19:12:37.242789  353502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 19:12:37.256422  353502 docker.go:217] disabling cri-docker service (if available) ...
	I0919 19:12:37.256487  353502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 19:12:37.271206  353502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 19:12:37.285065  353502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 19:12:37.383588  353502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 19:12:37.498687  353502 docker.go:233] disabling docker service ...
	I0919 19:12:37.498808  353502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 19:12:37.512938  353502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 19:12:37.527732  353502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 19:12:37.633841  353502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 19:12:37.741722  353502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 19:12:37.757158  353502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 19:12:37.775030  353502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 19:12:37.775158  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:12:37.786247  353502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 19:12:37.786320  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:12:37.798689  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:12:37.809989  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:12:37.821049  353502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 19:12:37.830911  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:12:37.842070  353502 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:12:37.852455  353502 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 19:12:37.863841  353502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 19:12:37.873353  353502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 19:12:37.882484  353502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:12:37.981038  353502 ssh_runner.go:195] Run: sudo systemctl restart crio
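(Editor's note: the run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart: pin the pause image, switch cgroup_manager to cgroupfs, force conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. The same edits expressed as Go regexp rewrites over the file contents — a sketch of the effect, not minikube's actual code:)

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf := string(data)
    	// Pin the pause image, as the first sed one-liner does.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// Set the cgroup driver and place conmon in the pod cgroup.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
    	// Allow binding low ports inside pods if no default_sysctls block exists yet.
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		panic(err)
    	}
    }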
	I0919 19:12:38.127135  353502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 19:12:38.127253  353502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 19:12:38.132057  353502 start.go:563] Will wait 60s for crictl version
	I0919 19:12:38.132307  353502 ssh_runner.go:195] Run: which crictl
	I0919 19:12:38.135923  353502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 19:12:38.183039  353502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 19:12:38.183179  353502 ssh_runner.go:195] Run: crio --version
	I0919 19:12:38.225940  353502 ssh_runner.go:195] Run: crio --version
	I0919 19:12:38.276824  353502 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0919 19:12:38.278929  353502 out.go:177]   - env NO_PROXY=192.168.49.2
	I0919 19:12:38.281054  353502 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 19:12:38.283471  353502 cli_runner.go:164] Run: docker network inspect ha-310211 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 19:12:38.300197  353502 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 19:12:38.304357  353502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
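(Editor's note: the /etc/hosts command above is a grep-then-rewrite trick: strip any existing host.minikube.internal line, append the fresh mapping, write to a temp file, then copy it over the original so readers never see a half-written file, and reruns stay idempotent. A Go version of the same idea:)

    package main

    import (
    	"os"
    	"strings"
    )

    // setHost rewrites hostsPath so that exactly one line maps name to ip.
    func setHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale mapping for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, hostsPath) // atomic swap on the same filesystem
    }

    func main() {
    	if err := setHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }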
	I0919 19:12:38.315650  353502 mustload.go:65] Loading cluster: ha-310211
	I0919 19:12:38.315911  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:12:38.316222  353502 cli_runner.go:164] Run: docker container inspect ha-310211 --format={{.State.Status}}
	I0919 19:12:38.338019  353502 host.go:66] Checking if "ha-310211" exists ...
	I0919 19:12:38.338309  353502 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211 for IP: 192.168.49.5
	I0919 19:12:38.338326  353502 certs.go:194] generating shared ca certs ...
	I0919 19:12:38.338385  353502 certs.go:226] acquiring lock for ca certs: {Name:mk523f1ff29ba1b125a662d8a16466e488af99fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 19:12:38.338558  353502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key
	I0919 19:12:38.338610  353502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key
	I0919 19:12:38.338624  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 19:12:38.338637  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 19:12:38.338649  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 19:12:38.338664  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 19:12:38.338721  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666.pem (1338 bytes)
	W0919 19:12:38.338754  353502 certs.go:480] ignoring /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666_empty.pem, impossibly tiny 0 bytes
	I0919 19:12:38.338766  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 19:12:38.338790  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/ca.pem (1082 bytes)
	I0919 19:12:38.338815  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/cert.pem (1123 bytes)
	I0919 19:12:38.338837  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/key.pem (1675 bytes)
	I0919 19:12:38.338885  353502 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem (1708 bytes)
	I0919 19:12:38.338918  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem -> /usr/share/ca-certificates/2926662.pem
	I0919 19:12:38.338931  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:12:38.338942  353502 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666.pem -> /usr/share/ca-certificates/292666.pem
	I0919 19:12:38.338959  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 19:12:38.366984  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0919 19:12:38.399256  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 19:12:38.426395  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 19:12:38.454006  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/ssl/certs/2926662.pem --> /usr/share/ca-certificates/2926662.pem (1708 bytes)
	I0919 19:12:38.481200  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 19:12:38.506990  353502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-287261/.minikube/certs/292666.pem --> /usr/share/ca-certificates/292666.pem (1338 bytes)
	I0919 19:12:38.533522  353502 ssh_runner.go:195] Run: openssl version
	I0919 19:12:38.539094  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 19:12:38.549110  353502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:12:38.553455  353502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:12:38.553560  353502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 19:12:38.560763  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 19:12:38.570221  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/292666.pem && ln -fs /usr/share/ca-certificates/292666.pem /etc/ssl/certs/292666.pem"
	I0919 19:12:38.580339  353502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/292666.pem
	I0919 19:12:38.584067  353502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 18:58 /usr/share/ca-certificates/292666.pem
	I0919 19:12:38.584254  353502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/292666.pem
	I0919 19:12:38.591783  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/292666.pem /etc/ssl/certs/51391683.0"
	I0919 19:12:38.601880  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2926662.pem && ln -fs /usr/share/ca-certificates/2926662.pem /etc/ssl/certs/2926662.pem"
	I0919 19:12:38.612673  353502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2926662.pem
	I0919 19:12:38.616493  353502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 18:58 /usr/share/ca-certificates/2926662.pem
	I0919 19:12:38.616589  353502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2926662.pem
	I0919 19:12:38.623817  353502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2926662.pem /etc/ssl/certs/3ec20f2e.0"
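(Editor's note: the openssl x509 -hash / ln -fs sequence above installs each CA under its OpenSSL subject-hash name in /etc/ssl/certs — e.g. b5213941.0 for minikubeCA.pem — which is how OpenSSL's verify path locates trust anchors. A sketch that shells out for the hash and creates the link; paths come from the log and only os/exec is used:)

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func installCA(pemPath string) error {
    	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. b5213941.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		panic(err)
    	}
    }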
	I0919 19:12:38.633197  353502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 19:12:38.636847  353502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 19:12:38.636896  353502 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I0919 19:12:38.637020  353502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-310211-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-310211 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 19:12:38.637093  353502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 19:12:38.646196  353502 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 19:12:38.646282  353502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0919 19:12:38.657080  353502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 19:12:38.677170  353502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 19:12:38.696516  353502 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 19:12:38.700197  353502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 19:12:38.711439  353502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:12:38.815596  353502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:12:38.828217  353502 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0919 19:12:38.828745  353502 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:12:38.830959  353502 out.go:177] * Verifying Kubernetes components...
	I0919 19:12:38.832755  353502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 19:12:38.934216  353502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 19:12:38.947356  353502 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 19:12:38.947641  353502 kapi.go:59] client config for ha-310211: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/client.crt", KeyFile:"/home/jenkins/minikube-integration/19664-287261/.minikube/profiles/ha-310211/client.key", CAFile:"/home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a1e6c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 19:12:38.947707  353502 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 19:12:38.949100  353502 node_ready.go:35] waiting up to 6m0s for node "ha-310211-m04" to be "Ready" ...
	I0919 19:12:38.949199  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m04
	I0919 19:12:38.949211  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:38.949220  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:38.949226  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:38.953245  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:38.953878  353502 node_ready.go:49] node "ha-310211-m04" has status "Ready":"True"
	I0919 19:12:38.953901  353502 node_ready.go:38] duration metric: took 4.773998ms for node "ha-310211-m04" to be "Ready" ...
	I0919 19:12:38.953912  353502 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
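(Editor's note: everything from here down is the readiness poll — a GET on the coredns pod followed by a GET on its node, repeated roughly every 500ms until the pod's Ready condition flips to True or the 6m budget expires. The shape of that loop, sketched against the raw API with a pre-authenticated *http.Client; client/TLS wiring is omitted, and the JSON fields follow the Kubernetes Pod schema:)

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    	"time"
    )

    type podStatus struct {
    	Status struct {
    		Conditions []struct {
    			Type   string `json:"type"`
    			Status string `json:"status"`
    		} `json:"conditions"`
    	} `json:"status"`
    }

    // waitReady polls the pod URL until its Ready condition is True or the deadline passes.
    func waitReady(c *http.Client, url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := c.Get(url)
    		if err == nil {
    			var p podStatus
    			json.NewDecoder(resp.Body).Decode(&p)
    			resp.Body.Close()
    			for _, cond := range p.Status.Conditions {
    				if cond.Type == "Ready" && cond.Status == "True" {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	return fmt.Errorf("timed out waiting for pod to be Ready")
    }

    func main() {
    	url := "https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6"
    	// The real test's client carries the profile's client cert and CA shown above.
    	if err := waitReady(http.DefaultClient, url, 6*time.Minute); err != nil {
    		panic(err)
    	}
    }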
	I0919 19:12:38.953982  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0919 19:12:38.953994  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:38.954002  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:38.954008  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:38.961367  353502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0919 19:12:38.971616  353502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:38.971732  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:38.971745  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:38.971754  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:38.971758  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:38.982031  353502 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0919 19:12:38.983250  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:38.983275  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:38.983285  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:38.983291  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:38.994615  353502 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0919 19:12:39.472188  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:39.472262  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:39.472289  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:39.472314  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:39.475692  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:39.476757  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:39.476779  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:39.476789  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:39.476795  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:39.479354  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:39.972664  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:39.972687  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:39.972697  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:39.972703  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:39.975478  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:39.976789  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:39.976820  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:39.976830  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:39.976837  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:39.979664  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:40.472857  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:40.472893  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:40.472903  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:40.472907  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:40.479362  353502 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0919 19:12:40.481230  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:40.481260  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:40.481274  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:40.481285  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:40.484824  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:40.972710  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:40.972757  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:40.972768  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:40.972774  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:40.977818  353502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:12:40.979258  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:40.979341  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:40.979366  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:40.979413  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:40.983719  353502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:12:40.985173  353502 pod_ready.go:103] pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace has status "Ready":"False"
	I0919 19:12:41.473165  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:41.473243  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:41.473267  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:41.473287  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:41.489809  353502 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0919 19:12:41.491251  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:41.491319  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:41.491358  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:41.491384  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:41.494554  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:41.972196  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:41.972268  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:41.972292  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:41.972317  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:41.978152  353502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:12:41.979043  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:41.979100  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:41.979124  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:41.979149  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:41.982155  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:42.472370  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:42.472396  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:42.472406  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:42.472411  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:42.475815  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:42.477321  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:42.477343  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:42.477352  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:42.477358  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:42.479998  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:42.971858  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:42.971883  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:42.971893  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:42.971898  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:42.974867  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:42.975617  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:42.975638  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:42.975648  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:42.975653  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:42.978344  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:43.472838  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:43.472911  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:43.472936  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:43.472959  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:43.475879  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:43.476581  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:43.476599  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:43.476608  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:43.476613  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:43.480027  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:43.480898  353502 pod_ready.go:103] pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace has status "Ready":"False"
	I0919 19:12:43.972739  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:43.972765  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:43.972774  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:43.972780  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:43.975622  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:43.976675  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:43.976700  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:43.976709  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:43.976713  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:43.979287  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:44.472253  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:44.472278  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:44.472288  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:44.472294  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:44.475264  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:44.476281  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:44.476301  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:44.476313  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:44.476319  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:44.479314  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:44.971902  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:44.971927  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:44.971937  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:44.971943  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:44.975359  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:44.976442  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:44.976464  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:44.976473  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:44.976478  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:44.979446  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:45.472442  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:45.472464  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:45.472474  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:45.472478  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:45.476476  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:45.477467  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:45.477488  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:45.477497  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:45.477502  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:45.480188  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:45.481111  353502 pod_ready.go:103] pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace has status "Ready":"False"
	I0919 19:12:45.972367  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:45.972392  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:45.972402  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:45.972409  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:45.976286  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:45.977195  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:45.977220  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:45.977231  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:45.977267  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:45.979868  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:46.471813  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:46.471837  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:46.471847  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:46.471852  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:46.474881  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:46.475739  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:46.475763  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:46.475772  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:46.475777  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:46.478408  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:46.972243  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:46.972264  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:46.972274  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:46.972279  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:46.976661  353502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:12:46.977739  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:46.977758  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:46.977768  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:46.977773  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:46.980863  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:47.472430  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:47.472462  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:47.472472  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:47.472477  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:47.475637  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:47.476713  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:47.476732  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:47.476741  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:47.476745  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:47.479314  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:47.972027  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:47.972067  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:47.972077  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:47.972088  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:47.976237  353502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:12:47.977194  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:47.977214  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:47.977224  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:47.977228  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:47.979876  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:47.980628  353502 pod_ready.go:103] pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace has status "Ready":"False"
	I0919 19:12:48.472270  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:48.472294  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:48.472304  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:48.472310  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:48.475028  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:48.475773  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:48.475792  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:48.475804  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:48.475808  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:48.478464  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:48.972410  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:48.972433  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:48.972443  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:48.972448  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:48.975374  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:48.976411  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:48.976434  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:48.976444  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:48.976449  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:48.979530  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:49.472493  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:49.472522  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:49.472556  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:49.472570  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:49.475742  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:49.476455  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:49.476469  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:49.476478  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:49.476516  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:49.478994  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:49.972312  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:49.972335  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:49.972352  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:49.972364  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:49.975282  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:49.976175  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:49.976196  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:49.976204  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:49.976209  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:49.979918  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:49.980694  353502 pod_ready.go:103] pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace has status "Ready":"False"
	I0919 19:12:50.472241  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:50.472307  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:50.472336  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:50.472367  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:50.476620  353502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:12:50.478023  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:50.478091  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:50.478117  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:50.478143  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:50.481809  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:50.971852  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:50.971901  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:50.971910  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:50.971915  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:50.975034  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:50.975954  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:50.975977  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:50.975987  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:50.975994  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:50.979026  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:51.472899  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:51.472919  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:51.472926  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:51.472930  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:51.475886  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:51.476674  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:51.476694  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:51.476703  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:51.476707  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:51.479297  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:51.971956  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:51.971981  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:51.971990  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:51.971994  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:51.975136  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:51.975974  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:51.975994  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:51.976029  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:51.976043  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:51.978954  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:52.472278  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:52.472302  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:52.472312  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:52.472318  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:52.475164  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:52.476036  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:52.476056  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:52.476066  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:52.476071  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:52.478618  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:52.479161  353502 pod_ready.go:103] pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace has status "Ready":"False"
	I0919 19:12:52.972370  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:52.972396  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:52.972405  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:52.972410  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:52.975181  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:52.975855  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:52.975866  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:52.975875  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:52.975880  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:52.978404  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:53.472337  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:53.472364  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:53.472375  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:53.472380  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:53.475488  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:53.476251  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:53.476279  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:53.476290  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:53.476296  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:53.479032  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:53.972641  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:53.972667  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:53.972677  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:53.972682  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:53.976028  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:53.977024  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:53.977045  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:53.977055  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:53.977061  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:53.980004  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:54.472790  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:54.472815  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:54.472825  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:54.472830  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:54.475793  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:54.476510  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:54.476531  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:54.476541  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:54.476546  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:54.479151  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:54.479796  353502 pod_ready.go:103] pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace has status "Ready":"False"
	I0919 19:12:54.972644  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:54.972667  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:54.972676  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:54.972681  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:54.975544  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:54.976517  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:54.976542  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:54.976551  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:54.976573  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:54.979179  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:55.471872  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:55.471893  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:55.471915  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:55.471919  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:55.484862  353502 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0919 19:12:55.485741  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:55.485758  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:55.485768  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:55.485772  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:55.491496  353502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:12:55.972443  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:55.972470  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:55.972479  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:55.972496  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:55.975848  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:55.977048  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:55.977069  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:55.977079  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:55.977083  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:55.981537  353502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:12:56.472588  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:56.472612  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:56.472622  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:56.472626  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:56.475932  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:56.476879  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:56.476903  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:56.476913  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:56.476919  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:56.479489  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:56.481026  353502 pod_ready.go:103] pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace has status "Ready":"False"
	I0919 19:12:56.972297  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:56.972321  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:56.972331  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:56.972337  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:56.975322  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:56.976228  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:56.976247  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:56.976256  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:56.976260  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:56.978760  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:57.472332  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-drds6
	I0919 19:12:57.472402  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.472438  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.472465  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.486810  353502 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0919 19:12:57.488229  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:57.488251  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.488260  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.488264  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.492290  353502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:12:57.493229  353502 pod_ready.go:98] node "ha-310211" hosting pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:12:57.493257  353502 pod_ready.go:82] duration metric: took 18.521608083s for pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace to be "Ready" ...
	E0919 19:12:57.493268  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211" hosting pod "coredns-7c65d6cfc9-drds6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:12:57.493275  353502 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rcmrq" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:57.493343  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rcmrq
	I0919 19:12:57.493357  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.493366  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.493378  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.499981  353502 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0919 19:12:57.503591  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:57.503615  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.503624  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.503629  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.514114  353502 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0919 19:12:57.520161  353502 pod_ready.go:98] node "ha-310211" hosting pod "coredns-7c65d6cfc9-rcmrq" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:12:57.520190  353502 pod_ready.go:82] duration metric: took 26.908104ms for pod "coredns-7c65d6cfc9-rcmrq" in "kube-system" namespace to be "Ready" ...
	E0919 19:12:57.520202  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211" hosting pod "coredns-7c65d6cfc9-rcmrq" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:12:57.520210  353502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:57.520278  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-310211
	I0919 19:12:57.520290  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.520300  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.520313  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.526029  353502 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0919 19:12:57.527176  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:57.527209  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.527219  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.527222  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.538380  353502 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0919 19:12:57.539428  353502 pod_ready.go:98] node "ha-310211" hosting pod "etcd-ha-310211" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:12:57.539456  353502 pod_ready.go:82] duration metric: took 19.238808ms for pod "etcd-ha-310211" in "kube-system" namespace to be "Ready" ...
	E0919 19:12:57.539467  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211" hosting pod "etcd-ha-310211" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:12:57.539474  353502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:57.539546  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-310211-m02
	I0919 19:12:57.539557  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.539565  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.539570  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.543049  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:57.551169  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:12:57.551193  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.551202  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.551206  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.558514  353502 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0919 19:12:57.559299  353502 pod_ready.go:93] pod "etcd-ha-310211-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:12:57.559354  353502 pod_ready.go:82] duration metric: took 19.869594ms for pod "etcd-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:57.559381  353502 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:57.559481  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-310211-m03
	I0919 19:12:57.559518  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.559541  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.559561  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.567070  353502 round_trippers.go:574] Response Status: 404 Not Found in 7 milliseconds
	I0919 19:12:57.567384  353502 pod_ready.go:98] error getting pod "etcd-ha-310211-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-310211-m03" not found
	I0919 19:12:57.567439  353502 pod_ready.go:82] duration metric: took 8.036583ms for pod "etcd-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	E0919 19:12:57.567479  353502 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "etcd-ha-310211-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-310211-m03" not found
	I0919 19:12:57.567517  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:57.567610  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211
	I0919 19:12:57.567643  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.567666  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.567686  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.579960  353502 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0919 19:12:57.673009  353502 request.go:632] Waited for 92.175546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:57.673120  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:57.673141  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.673216  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.673238  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.676073  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:57.677326  353502 pod_ready.go:98] node "ha-310211" hosting pod "kube-apiserver-ha-310211" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:12:57.677395  353502 pod_ready.go:82] duration metric: took 109.85637ms for pod "kube-apiserver-ha-310211" in "kube-system" namespace to be "Ready" ...
	E0919 19:12:57.677424  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211" hosting pod "kube-apiserver-ha-310211" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:12:57.677445  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:57.872830  353502 request.go:632] Waited for 195.286445ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211-m02
	I0919 19:12:57.872946  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211-m02
	I0919 19:12:57.873016  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:57.873044  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:57.873067  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:57.875927  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:58.073186  353502 request.go:632] Waited for 196.185186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:12:58.073323  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:12:58.073338  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:58.073347  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:58.073358  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:58.076512  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:58.077811  353502 pod_ready.go:93] pod "kube-apiserver-ha-310211-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:12:58.077837  353502 pod_ready.go:82] duration metric: took 400.349266ms for pod "kube-apiserver-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:58.077849  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:58.273040  353502 request.go:632] Waited for 195.121799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211-m03
	I0919 19:12:58.273110  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211-m03
	I0919 19:12:58.273117  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:58.273126  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:58.273131  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:58.284224  353502 round_trippers.go:574] Response Status: 404 Not Found in 11 milliseconds
	I0919 19:12:58.286170  353502 pod_ready.go:98] error getting pod "kube-apiserver-ha-310211-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-310211-m03" not found
	I0919 19:12:58.286205  353502 pod_ready.go:82] duration metric: took 208.347372ms for pod "kube-apiserver-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	E0919 19:12:58.286218  353502 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-ha-310211-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-310211-m03" not found
	I0919 19:12:58.286226  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:58.472602  353502 request.go:632] Waited for 186.299132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211
	I0919 19:12:58.472704  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211
	I0919 19:12:58.472714  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:58.472723  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:58.472729  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:58.476075  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:58.673254  353502 request.go:632] Waited for 196.342963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:58.673339  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:12:58.673403  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:58.673416  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:58.673433  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:58.676285  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:58.676983  353502 pod_ready.go:98] node "ha-310211" hosting pod "kube-controller-manager-ha-310211" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:12:58.677009  353502 pod_ready.go:82] duration metric: took 390.775005ms for pod "kube-controller-manager-ha-310211" in "kube-system" namespace to be "Ready" ...
	E0919 19:12:58.677021  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211" hosting pod "kube-controller-manager-ha-310211" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:12:58.677036  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:58.873305  353502 request.go:632] Waited for 196.187869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211-m02
	I0919 19:12:58.873426  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211-m02
	I0919 19:12:58.873474  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:58.873490  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:58.873496  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:58.876538  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:59.072659  353502 request.go:632] Waited for 195.326428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:12:59.072773  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:12:59.072840  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:59.072867  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:59.072908  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:59.075780  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:59.076411  353502 pod_ready.go:93] pod "kube-controller-manager-ha-310211-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:12:59.076433  353502 pod_ready.go:82] duration metric: took 399.376611ms for pod "kube-controller-manager-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:59.076462  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:59.273273  353502 request.go:632] Waited for 196.691664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211-m03
	I0919 19:12:59.273355  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-310211-m03
	I0919 19:12:59.273369  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:59.273379  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:59.273383  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:59.275926  353502 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0919 19:12:59.276094  353502 pod_ready.go:98] error getting pod "kube-controller-manager-ha-310211-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-310211-m03" not found
	I0919 19:12:59.276139  353502 pod_ready.go:82] duration metric: took 199.64896ms for pod "kube-controller-manager-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	E0919 19:12:59.276157  353502 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-310211-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-310211-m03" not found
	I0919 19:12:59.276169  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9jg6c" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:59.472596  353502 request.go:632] Waited for 196.345423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9jg6c
	I0919 19:12:59.472696  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9jg6c
	I0919 19:12:59.472707  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:59.472724  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:59.472734  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:59.475526  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:12:59.673189  353502 request.go:632] Waited for 196.687997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m04
	I0919 19:12:59.673275  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m04
	I0919 19:12:59.673307  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:59.673324  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:59.673329  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:59.677090  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:12:59.677672  353502 pod_ready.go:93] pod "kube-proxy-9jg6c" in "kube-system" namespace has status "Ready":"True"
	I0919 19:12:59.677695  353502 pod_ready.go:82] duration metric: took 401.512571ms for pod "kube-proxy-9jg6c" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:59.677709  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f2xdc" in "kube-system" namespace to be "Ready" ...
	I0919 19:12:59.873164  353502 request.go:632] Waited for 195.338611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2xdc
	I0919 19:12:59.873261  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f2xdc
	I0919 19:12:59.873290  353502 round_trippers.go:469] Request Headers:
	I0919 19:12:59.873309  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:12:59.873320  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:12:59.875997  353502 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0919 19:12:59.876227  353502 pod_ready.go:98] error getting pod "kube-proxy-f2xdc" in "kube-system" namespace (skipping!): pods "kube-proxy-f2xdc" not found
	I0919 19:12:59.876275  353502 pod_ready.go:82] duration metric: took 198.557759ms for pod "kube-proxy-f2xdc" in "kube-system" namespace to be "Ready" ...
	E0919 19:12:59.876292  353502 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-proxy-f2xdc" in "kube-system" namespace (skipping!): pods "kube-proxy-f2xdc" not found
	I0919 19:12:59.876301  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lbfq4" in "kube-system" namespace to be "Ready" ...
	I0919 19:13:00.073225  353502 request.go:632] Waited for 196.8462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lbfq4
	I0919 19:13:00.073432  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lbfq4
	I0919 19:13:00.073440  353502 round_trippers.go:469] Request Headers:
	I0919 19:13:00.073521  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:13:00.073530  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:13:00.157724  353502 round_trippers.go:574] Response Status: 200 OK in 84 milliseconds
	I0919 19:13:00.274345  353502 request.go:632] Waited for 106.068203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:13:00.274412  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:13:00.274419  353502 round_trippers.go:469] Request Headers:
	I0919 19:13:00.274429  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:13:00.274435  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:13:00.294685  353502 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0919 19:13:00.296125  353502 pod_ready.go:98] node "ha-310211" hosting pod "kube-proxy-lbfq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:13:00.296155  353502 pod_ready.go:82] duration metric: took 419.845563ms for pod "kube-proxy-lbfq4" in "kube-system" namespace to be "Ready" ...
	E0919 19:13:00.296166  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211" hosting pod "kube-proxy-lbfq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:13:00.296174  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vsrc4" in "kube-system" namespace to be "Ready" ...
	I0919 19:13:00.472572  353502 request.go:632] Waited for 176.317986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vsrc4
	I0919 19:13:00.472691  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vsrc4
	I0919 19:13:00.472705  353502 round_trippers.go:469] Request Headers:
	I0919 19:13:00.472715  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:13:00.472721  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:13:00.476250  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:13:00.672962  353502 request.go:632] Waited for 195.397138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:13:00.673028  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:13:00.673038  353502 round_trippers.go:469] Request Headers:
	I0919 19:13:00.673047  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:13:00.673051  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:13:00.676933  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:13:00.677568  353502 pod_ready.go:93] pod "kube-proxy-vsrc4" in "kube-system" namespace has status "Ready":"True"
	I0919 19:13:00.677591  353502 pod_ready.go:82] duration metric: took 381.409556ms for pod "kube-proxy-vsrc4" in "kube-system" namespace to be "Ready" ...
	I0919 19:13:00.677604  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-310211" in "kube-system" namespace to be "Ready" ...
	I0919 19:13:00.872441  353502 request.go:632] Waited for 194.735279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211
	I0919 19:13:00.872501  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211
	I0919 19:13:00.872508  353502 round_trippers.go:469] Request Headers:
	I0919 19:13:00.872526  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:13:00.872542  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:13:00.875423  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:13:01.072415  353502 request.go:632] Waited for 196.30242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:13:01.072520  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211
	I0919 19:13:01.072533  353502 round_trippers.go:469] Request Headers:
	I0919 19:13:01.072573  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:13:01.072580  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:13:01.075549  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:13:01.076177  353502 pod_ready.go:98] node "ha-310211" hosting pod "kube-scheduler-ha-310211" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:13:01.076200  353502 pod_ready.go:82] duration metric: took 398.588003ms for pod "kube-scheduler-ha-310211" in "kube-system" namespace to be "Ready" ...
	E0919 19:13:01.076227  353502 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-310211" hosting pod "kube-scheduler-ha-310211" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-310211" has status "Ready":"Unknown"
	I0919 19:13:01.076241  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:13:01.273043  353502 request.go:632] Waited for 196.728052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211-m02
	I0919 19:13:01.273141  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211-m02
	I0919 19:13:01.273156  353502 round_trippers.go:469] Request Headers:
	I0919 19:13:01.273166  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:13:01.273170  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:13:01.276038  353502 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0919 19:13:01.473115  353502 request.go:632] Waited for 196.298775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:13:01.473179  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-310211-m02
	I0919 19:13:01.473190  353502 round_trippers.go:469] Request Headers:
	I0919 19:13:01.473199  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:13:01.473212  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:13:01.476822  353502 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0919 19:13:01.477446  353502 pod_ready.go:93] pod "kube-scheduler-ha-310211-m02" in "kube-system" namespace has status "Ready":"True"
	I0919 19:13:01.477469  353502 pod_ready.go:82] duration metric: took 401.21953ms for pod "kube-scheduler-ha-310211-m02" in "kube-system" namespace to be "Ready" ...
	I0919 19:13:01.477484  353502 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	I0919 19:13:01.672878  353502 request.go:632] Waited for 195.324431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211-m03
	I0919 19:13:01.673003  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-310211-m03
	I0919 19:13:01.673017  353502 round_trippers.go:469] Request Headers:
	I0919 19:13:01.673026  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:13:01.673032  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:13:01.675601  353502 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0919 19:13:01.675917  353502 pod_ready.go:98] error getting pod "kube-scheduler-ha-310211-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-310211-m03" not found
	I0919 19:13:01.675950  353502 pod_ready.go:82] duration metric: took 198.45768ms for pod "kube-scheduler-ha-310211-m03" in "kube-system" namespace to be "Ready" ...
	E0919 19:13:01.675981  353502 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-310211-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-310211-m03" not found
	I0919 19:13:01.675992  353502 pod_ready.go:39] duration metric: took 22.722069856s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 19:13:01.676016  353502 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 19:13:01.676082  353502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:13:01.690496  353502 system_svc.go:56] duration metric: took 14.470259ms WaitForService to wait for kubelet
	I0919 19:13:01.690526  353502 kubeadm.go:582] duration metric: took 22.862216605s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 19:13:01.690545  353502 node_conditions.go:102] verifying NodePressure condition ...
	I0919 19:13:01.872878  353502 request.go:632] Waited for 182.237363ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0919 19:13:01.873053  353502 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0919 19:13:01.873065  353502 round_trippers.go:469] Request Headers:
	I0919 19:13:01.873074  353502 round_trippers.go:473]     Accept: application/json, */*
	I0919 19:13:01.873079  353502 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0919 19:13:01.877369  353502 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0919 19:13:01.878722  353502 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 19:13:01.878753  353502 node_conditions.go:123] node cpu capacity is 2
	I0919 19:13:01.878764  353502 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 19:13:01.878770  353502 node_conditions.go:123] node cpu capacity is 2
	I0919 19:13:01.878774  353502 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0919 19:13:01.878778  353502 node_conditions.go:123] node cpu capacity is 2
	I0919 19:13:01.878784  353502 node_conditions.go:105] duration metric: took 188.21384ms to run NodePressure ...
	I0919 19:13:01.878796  353502 start.go:241] waiting for startup goroutines ...
	I0919 19:13:01.878835  353502 start.go:255] writing updated cluster config ...
	I0919 19:13:01.879176  353502 ssh_runner.go:195] Run: rm -f paused
	I0919 19:13:01.949934  353502 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 19:13:01.955811  353502 out.go:177] * Done! kubectl is now configured to use "ha-310211" cluster and "default" namespace by default
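The paired GETs above (the pod, then its hosting node) are the readiness poll: both objects are re-fetched roughly every 500ms until the pod reports Ready, the node drops out of Ready, or the 6m0s budget expires, and the "Waited ... due to client-side throttling" entries come from client-go's default client-side rate limiter (QPS/Burst), not from server-side API priority and fairness. A minimal Go sketch of that pattern, assuming client-go is available and cfg is a *rest.Config loaded elsewhere; WaitPodReady and the QPS/Burst values are illustrative, not minikube's actual helpers:

package waitutil

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// WaitPodReady polls a pod every 500ms (the cadence visible in the log)
// until its Ready condition is True or the 6-minute budget expires.
func WaitPodReady(ctx context.Context, cfg *rest.Config, ns, name string) error {
	// client-go throttles on the client side by default; the
	// "Waited ... due to client-side throttling" lines above come from
	// this limiter, not from server-side priority and fairness.
	cfg.QPS = 5 // illustrative values, not minikube's settings
	cfg.Burst = 10
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

Raising cfg.QPS and cfg.Burst would shorten the throttling waits seen in the log, at the cost of more load on the apiserver.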
	
	
	==> CRI-O <==
	Sep 19 19:12:32 ha-310211 crio[642]: time="2024-09-19 19:12:32.956790409Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cfd7030e12cb5973b5921c1ae7a8881462d6683b8a9204a541796cb4a7e8adec/merged/etc/group: no such file or directory"
	Sep 19 19:12:33 ha-310211 crio[642]: time="2024-09-19 19:12:33.015930003Z" level=info msg="Created container 86116c85c84ebe1e776b1b02ca4b83c0729ad0a01f06cd710cbd90a769a523f4: kube-system/storage-provisioner/storage-provisioner" id=188cbccc-3b8d-4031-ae42-711763a590eb name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 19:12:33 ha-310211 crio[642]: time="2024-09-19 19:12:33.016567788Z" level=info msg="Starting container: 86116c85c84ebe1e776b1b02ca4b83c0729ad0a01f06cd710cbd90a769a523f4" id=c900cd8e-af85-4a5d-88fa-7dbfe344cb66 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 19:12:33 ha-310211 crio[642]: time="2024-09-19 19:12:33.027324655Z" level=info msg="Started container" PID=1858 containerID=86116c85c84ebe1e776b1b02ca4b83c0729ad0a01f06cd710cbd90a769a523f4 description=kube-system/storage-provisioner/storage-provisioner id=c900cd8e-af85-4a5d-88fa-7dbfe344cb66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0dfba64eb9821428718c8027ae353f057f5d156eb34b01468b593773b5164777
	Sep 19 19:12:39 ha-310211 crio[642]: time="2024-09-19 19:12:39.620832534Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=17c28087-81bd-490d-a645-1a40c55d3c8e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:12:39 ha-310211 crio[642]: time="2024-09-19 19:12:39.621104288Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=17c28087-81bd-490d-a645-1a40c55d3c8e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:12:39 ha-310211 crio[642]: time="2024-09-19 19:12:39.621849413Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.31.1" id=41ea14ea-bc4f-496d-be4d-545c9bb705c5 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:12:39 ha-310211 crio[642]: time="2024-09-19 19:12:39.622047937Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.31.1],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1 registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849],Size_:86930758,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},Info:map[string]string{},}" id=41ea14ea-bc4f-496d-be4d-545c9bb705c5 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:12:39 ha-310211 crio[642]: time="2024-09-19 19:12:39.622745973Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-310211/kube-controller-manager" id=139debdc-f49d-47d9-b650-5c47e40e23ea name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 19:12:39 ha-310211 crio[642]: time="2024-09-19 19:12:39.622845805Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 19:12:39 ha-310211 crio[642]: time="2024-09-19 19:12:39.695521961Z" level=info msg="Created container 29b2e21ed4077a8f120bbe2c680a99e1b8df33f694665dffd4736a1e77cb3405: kube-system/kube-controller-manager-ha-310211/kube-controller-manager" id=139debdc-f49d-47d9-b650-5c47e40e23ea name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 19:12:39 ha-310211 crio[642]: time="2024-09-19 19:12:39.696066356Z" level=info msg="Starting container: 29b2e21ed4077a8f120bbe2c680a99e1b8df33f694665dffd4736a1e77cb3405" id=ffa4b0ac-f1e6-4e96-bc9a-7f6b4f0700bd name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 19:12:39 ha-310211 crio[642]: time="2024-09-19 19:12:39.707821961Z" level=info msg="Started container" PID=1897 containerID=29b2e21ed4077a8f120bbe2c680a99e1b8df33f694665dffd4736a1e77cb3405 description=kube-system/kube-controller-manager-ha-310211/kube-controller-manager id=ffa4b0ac-f1e6-4e96-bc9a-7f6b4f0700bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=1fb0bf353a2f477ae7740ea28de31014f2e73ce29e1a63c0500b7888c41d2df3
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.411830467Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.434187377Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.434223562Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.434239546Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.453070477Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.453104586Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.453120397Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.473258974Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.473292081Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.473307934Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.493134592Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 19:12:40 ha-310211 crio[642]: time="2024-09-19 19:12:40.493170457Z" level=info msg="Updated default CNI network name to kindnet"
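The CREATE/WRITE/RENAME sequence above is kindnet rewriting its conflist atomically (write 10-kindnet.conflist.temp, then rename it into place) while CRI-O's CNI monitor re-reads /etc/cni/net.d and re-selects the default network on each event. A minimal sketch of that watch loop using the fsnotify package; illustrative of the mechanism only, not CRI-O's actual implementation:

package cniwatch

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// Watch reloads CNI configuration whenever files in dir are created,
// written, or renamed -- the same events CRI-O logs above.
func Watch(dir string, reload func()) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()
	if err := w.Add(dir); err != nil {
		return err
	}
	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return nil
			}
			// An atomic update (write .temp, then rename into place)
			// surfaces as separate CREATE, WRITE, and RENAME events.
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
				log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
				reload() // re-scan dir and pick the default network
			}
		case err, ok := <-w.Errors:
			if !ok {
				return nil
			}
			log.Printf("watch error: %v", err)
		}
	}
}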
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	29b2e21ed4077       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   24 seconds ago       Running             kube-controller-manager   8                   1fb0bf353a2f4       kube-controller-manager-ha-310211
	86116c85c84eb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   31 seconds ago       Running             storage-provisioner       5                   0dfba64eb9821       storage-provisioner
	5bae89efd9b7f       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   40 seconds ago       Running             kube-vip                  3                   2a07ca8b1cd38       kube-vip-ha-310211
	6229ec81b45b0       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   44 seconds ago       Running             kube-apiserver            4                   95a7e52f2e3b7       kube-apiserver-ha-310211
	2757b96603153       279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e   About a minute ago   Exited              kube-controller-manager   7                   1fb0bf353a2f4       kube-controller-manager-ha-310211
	b38cfe91b67f6       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   a9d6fc5565779       coredns-7c65d6cfc9-rcmrq
	a99d66efb0a4c       24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d   About a minute ago   Running             kube-proxy                2                   69c4ea79136b1       kube-proxy-lbfq4
	4b3b8abb9d84e       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4   About a minute ago   Running             coredns                   2                   46dcd0da45d06       coredns-7c65d6cfc9-drds6
	bce6465da3642       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       4                   0dfba64eb9821       storage-provisioner
	32f5fc9ea22d6       6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51   About a minute ago   Running             kindnet-cni               2                   2d9b580938da4       kindnet-b57tk
	169bd9a3b60f6       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   2                   4f2aacfcdd8fb       busybox-7dff88458-nlhw4
	b4a9ad827aeae       7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d   About a minute ago   Running             kube-scheduler            2                   f1b718e26ca62       kube-scheduler-ha-310211
	b43dee53c7c2a       d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853   About a minute ago   Exited              kube-apiserver            3                   95a7e52f2e3b7       kube-apiserver-ha-310211
	ffda86d862379       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da   About a minute ago   Running             etcd                      2                   dfdbc847e8aa2       etcd-ha-310211
	17787b410981a       7e2a4e229620ba3a757dc3699d10e8f77c453b7ee71936521668dec51669679d   About a minute ago   Exited              kube-vip                  2                   2a07ca8b1cd38       kube-vip-ha-310211
	
	
	==> coredns [4b3b8abb9d84e36ee8eed02f71ac50ad6d71ebdbf0f6ca84850ae51edd5d8135] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:36038 - 53056 "HINFO IN 7387241782657487493.7153285413637257791. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025707115s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1721285925]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 19:12:01.951) (total time: 30001ms):
	Trace[1721285925]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:12:31.952)
	Trace[1721285925]: [30.001204984s] [30.001204984s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1843838661]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 19:12:01.951) (total time: 30001ms):
	Trace[1843838661]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:12:31.952)
	Trace[1843838661]: [30.001067155s] [30.001067155s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1317308459]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 19:12:01.952) (total time: 30001ms):
	Trace[1317308459]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (19:12:31.953)
	Trace[1317308459]: [30.001056857s] [30.001056857s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [b38cfe91b67f697634f9605693d10a3fcc22d6ef90167651f9897ea541017d41] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42767 - 29499 "HINFO IN 985846391896602191.2283282952629127057. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.051222392s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[325606451]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 19:12:01.953) (total time: 30000ms):
	Trace[325606451]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:12:31.954)
	Trace[325606451]: [30.000800545s] [30.000800545s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1914426053]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 19:12:01.954) (total time: 30000ms):
	Trace[1914426053]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:12:31.954)
	Trace[1914426053]: [30.000536938s] [30.000536938s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1801606528]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (19-Sep-2024 19:12:01.954) (total time: 30000ms):
	Trace[1801606528]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:12:31.955)
	Trace[1801606528]: [30.000734189s] [30.000734189s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
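	
	Note: the repeated "dial tcp 10.96.0.1:443: i/o timeout" errors above show CoreDNS
	unable to reach the in-cluster API Service VIP while the control plane restarted.
	A minimal diagnostic sketch (the kube-proxy label selector is the kubeadm default;
	treat it as an assumption, not part of the test harness):
	
	    kubectl get endpoints kubernetes -n default
	    kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide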
	
	
	==> describe nodes <==
	Name:               ha-310211
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-310211
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-310211
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T19_02_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:02:16 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-310211
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:12:13 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 19 Sep 2024 19:11:43 +0000   Thu, 19 Sep 2024 19:12:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 19 Sep 2024 19:11:43 +0000   Thu, 19 Sep 2024 19:12:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 19 Sep 2024 19:11:43 +0000   Thu, 19 Sep 2024 19:12:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 19 Sep 2024 19:11:43 +0000   Thu, 19 Sep 2024 19:12:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-310211
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 92adfc994fec457dabc7ab59b95337c2
	  System UUID:                3e3e1866-5a1c-4511-b366-25d44c6ad0e2
	  Boot ID:                    52db61fe-4049-4d60-8bc0-73f7fa38c59e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nlhw4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 coredns-7c65d6cfc9-drds6             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 coredns-7c65d6cfc9-rcmrq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-ha-310211                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-b57tk                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-310211             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-310211    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-lbfq4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-310211             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-310211                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 62s                    kube-proxy       
	  Normal   Starting                 4m55s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-310211 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-310211 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-310211 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node ha-310211 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node ha-310211 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node ha-310211 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           10m                    node-controller  Node ha-310211 event: Registered Node ha-310211 in Controller
	  Normal   NodeReady                10m                    kubelet          Node ha-310211 status is now: NodeReady
	  Normal   RegisteredNode           10m                    node-controller  Node ha-310211 event: Registered Node ha-310211 in Controller
	  Normal   RegisteredNode           9m5s                   node-controller  Node ha-310211 event: Registered Node ha-310211 in Controller
	  Normal   RegisteredNode           6m3s                   node-controller  Node ha-310211 event: Registered Node ha-310211 in Controller
	  Normal   NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-310211 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-310211 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-310211 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 5m25s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 5m25s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-310211 event: Registered Node ha-310211 in Controller
	  Normal   RegisteredNode           3m53s                  node-controller  Node ha-310211 event: Registered Node ha-310211 in Controller
	  Normal   RegisteredNode           3m33s                  node-controller  Node ha-310211 event: Registered Node ha-310211 in Controller
	  Normal   Starting                 119s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  119s (x8 over 119s)    kubelet          Node ha-310211 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    119s (x8 over 119s)    kubelet          Node ha-310211 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     119s (x7 over 119s)    kubelet          Node ha-310211 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                    node-controller  Node ha-310211 event: Registered Node ha-310211 in Controller
	  Normal   RegisteredNode           21s                    node-controller  Node ha-310211 event: Registered Node ha-310211 in Controller
	  Normal   NodeNotReady             7s                     node-controller  Node ha-310211 status is now: NodeNotReady
	
	
	Name:               ha-310211-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-310211-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-310211
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_02_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:02:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-310211-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:12:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:11:46 +0000   Thu, 19 Sep 2024 19:06:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:11:46 +0000   Thu, 19 Sep 2024 19:06:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:11:46 +0000   Thu, 19 Sep 2024 19:06:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:11:46 +0000   Thu, 19 Sep 2024 19:06:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-310211-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7182f95640d1436a9f04070f7e01b94b
	  System UUID:                b0fab85f-14a3-421b-9f95-e4d303c3a87c
	  Boot ID:                    52db61fe-4049-4d60-8bc0-73f7fa38c59e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8r4j5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 etcd-ha-310211-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-vhvq2                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-310211-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-310211-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vsrc4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-310211-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-310211-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 75s                    kube-proxy       
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 6m9s                   kube-proxy       
	  Normal   Starting                 4m41s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-310211-m02 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 10m                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-310211-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-310211-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           10m                    node-controller  Node ha-310211-m02 event: Registered Node ha-310211-m02 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-310211-m02 event: Registered Node ha-310211-m02 in Controller
	  Normal   RegisteredNode           9m5s                   node-controller  Node ha-310211-m02 event: Registered Node ha-310211-m02 in Controller
	  Normal   NodeHasSufficientPID     6m35s (x7 over 6m35s)  kubelet          Node ha-310211-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m35s (x8 over 6m35s)  kubelet          Node ha-310211-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m35s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m35s (x8 over 6m35s)  kubelet          Node ha-310211-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             6m18s                  node-controller  Node ha-310211-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           6m3s                   node-controller  Node ha-310211-m02 event: Registered Node ha-310211-m02 in Controller
	  Normal   NodeHasNoDiskPressure    5m23s (x8 over 5m23s)  kubelet          Node ha-310211-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m23s (x7 over 5m23s)  kubelet          Node ha-310211-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m23s (x8 over 5m23s)  kubelet          Node ha-310211-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 5m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m23s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-310211-m02 event: Registered Node ha-310211-m02 in Controller
	  Normal   RegisteredNode           3m53s                  node-controller  Node ha-310211-m02 event: Registered Node ha-310211-m02 in Controller
	  Normal   RegisteredNode           3m33s                  node-controller  Node ha-310211-m02 event: Registered Node ha-310211-m02 in Controller
	  Normal   Starting                 117s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  117s (x8 over 117s)    kubelet          Node ha-310211-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    117s (x8 over 117s)    kubelet          Node ha-310211-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     117s (x7 over 117s)    kubelet          Node ha-310211-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                    node-controller  Node ha-310211-m02 event: Registered Node ha-310211-m02 in Controller
	  Normal   RegisteredNode           21s                    node-controller  Node ha-310211-m02 event: Registered Node ha-310211-m02 in Controller
	
	
	Name:               ha-310211-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-310211-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=ha-310211
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_19T19_05_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:05:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-310211-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:12:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:12:45 +0000   Thu, 19 Sep 2024 19:09:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:12:45 +0000   Thu, 19 Sep 2024 19:09:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:12:45 +0000   Thu, 19 Sep 2024 19:09:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:12:45 +0000   Thu, 19 Sep 2024 19:09:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-310211-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 492b7654171e4a0e98077a7fdeb39c87
	  System UUID:                b2381de9-ba79-4c58-adc9-174754ee0be3
	  Boot ID:                    52db61fe-4049-4d60-8bc0-73f7fa38c59e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-mqwnz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kindnet-g4zg9              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m59s
	  kube-system                 kube-proxy-9jg6c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8s                     kube-proxy       
	  Normal   Starting                 7m58s                  kube-proxy       
	  Normal   Starting                 2m59s                  kube-proxy       
	  Warning  CgroupV1                 8m                     kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 8m                     kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  7m59s (x2 over 7m59s)  kubelet          Node ha-310211-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m59s (x2 over 7m59s)  kubelet          Node ha-310211-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m59s (x2 over 7m59s)  kubelet          Node ha-310211-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m58s                  node-controller  Node ha-310211-m04 event: Registered Node ha-310211-m04 in Controller
	  Normal   RegisteredNode           7m58s                  node-controller  Node ha-310211-m04 event: Registered Node ha-310211-m04 in Controller
	  Normal   RegisteredNode           7m55s                  node-controller  Node ha-310211-m04 event: Registered Node ha-310211-m04 in Controller
	  Normal   NodeReady                7m17s                  kubelet          Node ha-310211-m04 status is now: NodeReady
	  Normal   RegisteredNode           6m3s                   node-controller  Node ha-310211-m04 event: Registered Node ha-310211-m04 in Controller
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-310211-m04 event: Registered Node ha-310211-m04 in Controller
	  Normal   NodeNotReady             4m9s                   node-controller  Node ha-310211-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m53s                  node-controller  Node ha-310211-m04 event: Registered Node ha-310211-m04 in Controller
	  Normal   RegisteredNode           3m33s                  node-controller  Node ha-310211-m04 event: Registered Node ha-310211-m04 in Controller
	  Normal   Starting                 3m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m19s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     3m13s (x7 over 3m19s)  kubelet          Node ha-310211-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m6s (x8 over 3m19s)   kubelet          Node ha-310211-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m6s (x8 over 3m19s)   kubelet          Node ha-310211-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           47s                    node-controller  Node ha-310211-m04 event: Registered Node ha-310211-m04 in Controller
	  Normal   Starting                 32s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientPID     26s (x7 over 32s)      kubelet          Node ha-310211-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           21s                    node-controller  Node ha-310211-m04 event: Registered Node ha-310211-m04 in Controller
	  Normal   NodeHasSufficientMemory  19s (x8 over 32s)      kubelet          Node ha-310211-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 32s)      kubelet          Node ha-310211-m04 status is now: NodeHasNoDiskPressure
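	
	Note: ha-310211 carries node.kubernetes.io/unreachable taints and reports all
	conditions as Unknown because its kubelet stopped posting status during the
	restart, while m02 and m04 recovered to Ready. A quick sketch for surfacing
	readiness and taints across the cluster (standard kubectl, nothing
	harness-specific):
	
	    kubectl get nodes -o wide
	    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'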
	
	
	==> dmesg <==
	[Sep19 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014930] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.480178] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.743811] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.535974] kauditd_printk_skb: 36 callbacks suppressed
	[Sep19 17:29] hrtimer: interrupt took 7222366 ns
	[Sep19 17:52] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep19 19:11] IPVS: rr: TCP 192.168.49.254:8443 - no destination available
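	
	Note: the IPVS "no destination available" message for 192.168.49.254:8443 means the
	kube-vip load-balancer VIP briefly had no healthy apiserver backends. A sketch for
	listing the virtual servers on the node (assumes ipvsadm is installed, which the
	report does not confirm):
	
	    sudo ipvsadm -Ln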
	
	
	==> etcd [ffda86d8623798eb4d602a607feaa7f803994c4d7195de08b12c08889c140b59] <==
	{"level":"info","ts":"2024-09-19T19:11:39.259932Z","caller":"traceutil/trace.go:171","msg":"trace[1240723979] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; }","duration":"7.677308347s","start":"2024-09-19T19:11:31.582617Z","end":"2024-09-19T19:11:39.259925Z","steps":["trace[1240723979] 'agreement among raft nodes before linearized reading'  (duration: 7.666232552s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T19:11:39.259952Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:11:31.582606Z","time spent":"7.677339001s","remote":"127.0.0.1:58892","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":0,"response size":0,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-19T19:11:39.260190Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:11:31.987547Z","time spent":"7.2726292s","remote":"127.0.0.1:58910","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:500 "}
	{"level":"warn","ts":"2024-09-19T19:11:39.249424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.270271724s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-19T19:11:39.260526Z","caller":"traceutil/trace.go:171","msg":"trace[318113268] range","detail":"{range_begin:/registry/validatingadmissionpolicies/; range_end:/registry/validatingadmissionpolicies0; }","duration":"7.281513968s","start":"2024-09-19T19:11:31.979002Z","end":"2024-09-19T19:11:39.260516Z","steps":["trace[318113268] 'agreement among raft nodes before linearized reading'  (duration: 7.270183124s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T19:11:39.260557Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:11:31.978964Z","time spent":"7.281581464s","remote":"127.0.0.1:59206","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 "}
	{"level":"warn","ts":"2024-09-19T19:11:39.249551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.332728681s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-19T19:11:39.260587Z","caller":"traceutil/trace.go:171","msg":"trace[656826277] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; }","duration":"7.343773764s","start":"2024-09-19T19:11:31.916809Z","end":"2024-09-19T19:11:39.260582Z","steps":["trace[656826277] 'agreement among raft nodes before linearized reading'  (duration: 7.332727992s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T19:11:39.260608Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:11:31.916771Z","time spent":"7.343832357s","remote":"127.0.0.1:58844","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" limit:500 "}
	{"level":"warn","ts":"2024-09-19T19:11:39.250250Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.444583067s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:500 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-19T19:11:39.261105Z","caller":"traceutil/trace.go:171","msg":"trace[2018260525] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; }","duration":"7.455556256s","start":"2024-09-19T19:11:31.805539Z","end":"2024-09-19T19:11:39.261095Z","steps":["trace[2018260525] 'agreement among raft nodes before linearized reading'  (duration: 7.444582435s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T19:11:39.261142Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:11:31.805457Z","time spent":"7.455672211s","remote":"127.0.0.1:58702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":121,"response count":0,"response size":0,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:500 "}
	{"level":"warn","ts":"2024-09-19T19:11:39.250355Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.731472891s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-19T19:11:39.261234Z","caller":"traceutil/trace.go:171","msg":"trace[1985424706] range","detail":"{range_begin:/registry/csinodes/; range_end:/registry/csinodes0; }","duration":"7.742352886s","start":"2024-09-19T19:11:31.518875Z","end":"2024-09-19T19:11:39.261228Z","steps":["trace[1985424706] 'agreement among raft nodes before linearized reading'  (duration: 7.731472882s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T19:11:39.261256Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:11:31.518829Z","time spent":"7.742419865s","remote":"127.0.0.1:59098","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":0,"response size":0,"request content":"key:\"/registry/csinodes/\" range_end:\"/registry/csinodes0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-19T19:11:39.251694Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.668780807s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-19T19:11:39.261600Z","caller":"traceutil/trace.go:171","msg":"trace[1742439665] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; }","duration":"7.678684546s","start":"2024-09-19T19:11:31.582909Z","end":"2024-09-19T19:11:39.261593Z","steps":["trace[1742439665] 'agreement among raft nodes before linearized reading'  (duration: 7.668780757s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T19:11:39.261629Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:11:31.582860Z","time spent":"7.678758868s","remote":"127.0.0.1:58910","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:10000 "}
	{"level":"warn","ts":"2024-09-19T19:11:39.248868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.666265798s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 ","response":"","error":"etcdserver: leader changed"}
	{"level":"info","ts":"2024-09-19T19:11:39.263151Z","caller":"traceutil/trace.go:171","msg":"trace[1532670882] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; }","duration":"7.680542904s","start":"2024-09-19T19:11:31.582598Z","end":"2024-09-19T19:11:39.263140Z","steps":["trace[1532670882] 'agreement among raft nodes before linearized reading'  (duration: 7.666265601s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T19:11:39.263195Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:11:31.582571Z","time spent":"7.680604459s","remote":"127.0.0.1:58930","response type":"/etcdserverpb.KV/Range","request count":0,"request size":77,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" limit:10000 "}
	{"level":"info","ts":"2024-09-19T19:11:39.265468Z","caller":"etcdserver/v3_server.go:912","msg":"first commit in current term: resending ReadIndex request"}
	{"level":"warn","ts":"2024-09-19T19:11:39.274299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"775.816631ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T19:11:39.274366Z","caller":"traceutil/trace.go:171","msg":"trace[422815612] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:2637; }","duration":"775.903024ms","start":"2024-09-19T19:11:38.498450Z","end":"2024-09-19T19:11:39.274353Z","steps":["trace[422815612] 'agreement among raft nodes before linearized reading'  (duration: 768.590805ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T19:11:39.274400Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-19T19:11:38.498413Z","time spent":"775.974663ms","remote":"127.0.0.1:58702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":121,"response count":0,"response size":29,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" limit:10000 "}
	
	
	==> kernel <==
	 19:13:05 up  2:55,  0 users,  load average: 1.84, 2.29, 1.80
	Linux ha-310211 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [32f5fc9ea22d6bc496bbff9686bfea02c2d5c192ec2a97c544dcadd787eea245] <==
	Trace[1864517427]: [30.001687604s] [30.001687604s] END
	E0919 19:12:30.404643       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0919 19:12:32.003163       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0919 19:12:32.003212       1 metrics.go:61] Registering metrics
	I0919 19:12:32.003277       1 controller.go:374] Syncing nftables rules
	I0919 19:12:40.410649       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0919 19:12:40.410794       1 main.go:322] Node ha-310211-m02 has CIDR [10.244.1.0/24] 
	I0919 19:12:40.410968       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0919 19:12:40.411079       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0919 19:12:40.411119       1 main.go:322] Node ha-310211-m04 has CIDR [10.244.3.0/24] 
	I0919 19:12:40.411194       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0919 19:12:40.411268       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:12:40.411306       1 main.go:299] handling current node
	I0919 19:12:50.405543       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:12:50.405584       1 main.go:299] handling current node
	I0919 19:12:50.405600       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0919 19:12:50.405606       1 main.go:322] Node ha-310211-m02 has CIDR [10.244.1.0/24] 
	I0919 19:12:50.405714       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0919 19:12:50.405727       1 main.go:322] Node ha-310211-m04 has CIDR [10.244.3.0/24] 
	I0919 19:13:00.402491       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0919 19:13:00.402556       1 main.go:322] Node ha-310211-m02 has CIDR [10.244.1.0/24] 
	I0919 19:13:00.402742       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0919 19:13:00.402754       1 main.go:322] Node ha-310211-m04 has CIDR [10.244.3.0/24] 
	I0919 19:13:00.402809       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:13:00.402816       1 main.go:299] handling current node
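	
	Note: kindnet re-registered routes for the peer pod CIDRs (10.244.1.0/24 via
	192.168.49.3 and 10.244.3.0/24 via 192.168.49.5) after resyncing. A sketch for
	confirming them on the node:
	
	    ip route show | grep 10.244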
	
	
	==> kube-apiserver [6229ec81b45b0d204a149c3e21a75e5350360d7f1eeac4300fb0c0f0ecbbd159] <==
	I0919 19:12:23.099652       1 establishing_controller.go:81] Starting EstablishingController
	I0919 19:12:23.099670       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0919 19:12:23.099681       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0919 19:12:23.099692       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0919 19:12:23.205084       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0919 19:12:23.212532       1 aggregator.go:171] initial CRD sync complete...
	I0919 19:12:23.212625       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 19:12:23.212674       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 19:12:23.212829       1 cache.go:39] Caches are synced for autoregister controller
	I0919 19:12:23.261076       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0919 19:12:23.278474       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 19:12:23.278584       1 policy_source.go:224] refreshing policies
	I0919 19:12:23.278683       1 shared_informer.go:320] Caches are synced for configmaps
	I0919 19:12:23.278774       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0919 19:12:23.278984       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0919 19:12:23.279001       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0919 19:12:23.279183       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0919 19:12:23.279827       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 19:12:23.285267       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 19:12:23.286469       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0919 19:12:23.325307       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 19:12:23.785376       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 19:12:24.505018       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0919 19:12:24.506914       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 19:12:24.520402       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b43dee53c7c2a92ffca020512f0333c66528d678e522728758294bf2c72b95b3] <==
	W0919 19:11:39.273248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: etcdserver: leader changed
	E0919 19:11:39.273267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: etcdserver: leader changed" logger="UnhandledError"
	I0919 19:11:39.859618       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 19:11:40.654638       1 shared_informer.go:320] Caches are synced for configmaps
	I0919 19:11:41.259785       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0919 19:11:41.259816       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0919 19:11:41.555104       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0919 19:11:41.560659       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 19:11:41.562341       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0919 19:11:41.677709       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0919 19:11:41.683865       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0919 19:11:41.683958       1 policy_source.go:224] refreshing policies
	I0919 19:11:42.055411       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0919 19:11:42.278816       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0919 19:11:42.278958       1 aggregator.go:171] initial CRD sync complete...
	I0919 19:11:42.278993       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 19:11:42.279027       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 19:11:42.279061       1 cache.go:39] Caches are synced for autoregister controller
	I0919 19:11:42.428936       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 19:11:42.459812       1 cache.go:39] Caches are synced for RemoteAvailability controller
	W0919 19:11:42.479761       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I0919 19:11:42.482154       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 19:11:42.559514       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0919 19:11:42.588027       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	F0919 19:12:18.854305       1 hooks.go:210] PostStartHook "start-service-ip-repair-controllers" failed: unable to perform initial IP and Port allocation check
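	
	Note: this apiserver instance exited fatally because the
	"start-service-ip-repair-controllers" PostStartHook never completed its initial
	Service IP/port allocation check; the controller-manager healthz output above shows
	the same hook as the one failing check. A sketch for reading per-check healthz
	status from a live apiserver:
	
	    kubectl get --raw '/healthz?verbose'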
	
	
	==> kube-controller-manager [2757b9660315376694b6d58bcccddc152c9f12f1d1942d1051982d87ad1f6b13] <==
	I0919 19:12:03.885162       1 serving.go:386] Generated self-signed cert in-memory
	I0919 19:12:04.459626       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0919 19:12:04.459656       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:12:04.461256       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 19:12:04.461455       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 19:12:04.461564       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0919 19:12:04.461612       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0919 19:12:14.480491       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [29b2e21ed4077a8f120bbe2c680a99e1b8df33f694665dffd4736a1e77cb3405] <==
	I0919 19:12:43.316240       1 shared_informer.go:320] Caches are synced for deployment
	I0919 19:12:43.316206       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0919 19:12:43.316224       1 shared_informer.go:320] Caches are synced for PVC protection
	I0919 19:12:43.316861       1 shared_informer.go:320] Caches are synced for ephemeral
	I0919 19:12:43.320208       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0919 19:12:43.323372       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0919 19:12:43.323445       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-310211-m04"
	I0919 19:12:43.456879       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0919 19:12:43.470484       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 19:12:43.482946       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 19:12:43.916615       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 19:12:43.916722       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 19:12:43.926823       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 19:12:45.304231       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-310211-m04"
	I0919 19:12:55.420718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.106162ms"
	I0919 19:12:55.420923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.978µs"
	I0919 19:12:56.576938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.350968ms"
	I0919 19:12:56.577060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.966µs"
	I0919 19:12:57.252300       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-310211-m04"
	I0919 19:12:57.252485       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-310211"
	I0919 19:12:57.278390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-310211"
	I0919 19:12:57.354986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.247097ms"
	I0919 19:12:57.355899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.678µs"
	I0919 19:12:58.286641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-310211"
	I0919 19:13:02.576721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-310211"
	
	
	==> kube-proxy [a99d66efb0a4c68b903726fcbd37bdb3ae3650674257cbb72f26cc5f04eef37f] <==
	I0919 19:12:02.009040       1 server_linux.go:66] "Using iptables proxy"
	I0919 19:12:02.107866       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 19:12:02.107960       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:12:02.127379       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 19:12:02.127439       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:12:02.129574       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:12:02.130074       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:12:02.130101       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:12:02.131986       1 config.go:199] "Starting service config controller"
	I0919 19:12:02.132019       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:12:02.132047       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:12:02.132052       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:12:02.132591       1 config.go:328] "Starting node config controller"
	I0919 19:12:02.132611       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:12:02.232776       1 shared_informer.go:320] Caches are synced for node config
	I0919 19:12:02.232811       1 shared_informer.go:320] Caches are synced for service config
	I0919 19:12:02.232840       1 shared_informer.go:320] Caches are synced for endpoint slice config
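	
	Note: kube-proxy came back in iptables mode and synced all three configs (service,
	endpoint slice, node). If Service VIPs such as 10.96.0.1 still time out after this
	point, the node-local NAT rules are the next thing to inspect (sketch, run on the
	node):
	
	    sudo iptables-save -t nat | grep -E 'KUBE-SERVICES|10.96.0.1'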
	
	
	==> kube-scheduler [b4a9ad827aeae834b5fb2ec2683a1fa519d5615604e4d01e21a59ac29c3f2642] <==
	E0919 19:11:38.614866       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0919 19:11:38.718981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 19:11:38.719059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 19:11:38.894998       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0919 19:11:38.895048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 19:11:40.536067       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 19:11:40.536201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 19:11:40.653583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 19:11:40.653642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 19:11:51.993854       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 19:12:23.160256       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.2:58784->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.172707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:58678->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.172798       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes) - error from a previous attempt: read tcp 192.168.49.2:58662->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.172837       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps) - error from a previous attempt: read tcp 192.168.49.2:58646->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.172874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods) - error from a previous attempt: read tcp 192.168.49.2:58766->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.172909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services) - error from a previous attempt: read tcp 192.168.49.2:58692->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.172942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims) - error from a previous attempt: read tcp 192.168.49.2:58746->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.172976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces) - error from a previous attempt: read tcp 192.168.49.2:58774->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.173011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps) - error from a previous attempt: read tcp 192.168.49.2:58736->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.173048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:58762->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.173084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy) - error from a previous attempt: read tcp 192.168.49.2:58722->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.173119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers) - error from a previous attempt: read tcp 192.168.49.2:58706->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.173152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:58758->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.173183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes) - error from a previous attempt: read tcp 192.168.49.2:58652->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	E0919 19:12:23.173227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io) - error from a previous attempt: read tcp 192.168.49.2:58682->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 19 19:12:17 ha-310211 kubelet[758]: E0919 19:12:17.545673     758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-310211_kube-system(9069f49906b31ce6b9270dd576c04d63)\"" pod="kube-system/kube-controller-manager-ha-310211" podUID="9069f49906b31ce6b9270dd576c04d63"
	Sep 19 19:12:19 ha-310211 kubelet[758]: I0919 19:12:19.897242     758 scope.go:117] "RemoveContainer" containerID="b43dee53c7c2a92ffca020512f0333c66528d678e522728758294bf2c72b95b3"
	Sep 19 19:12:19 ha-310211 kubelet[758]: I0919 19:12:19.899146     758 status_manager.go:851] "Failed to get status for pod" podUID="32e184648b3b5d40f61952a9faa84e91" pod="kube-system/kube-apiserver-ha-310211" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-310211\": dial tcp 192.168.49.254:8443: connect: connection refused"
	Sep 19 19:12:19 ha-310211 kubelet[758]: E0919 19:12:19.899276     758 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-310211.17f6baf5d543610e\": dial tcp 192.168.49.254:8443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-ha-310211.17f6baf5d543610e  kube-system   2805 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-310211,UID:32e184648b3b5d40f61952a9faa84e91,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.31.1\" already present on machine,Source:EventSource{Component:kubelet,Host:ha-310211,},FirstTimestamp:2024-09-19 19:11:12 +0000 UTC,LastTimestamp:2024-09-19 19:12:19.898355972 +0000 UTC m=+74.445025276,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-310211,}"
	Sep 19 19:12:22 ha-310211 kubelet[758]: E0919 19:12:22.981343     758 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:60816->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 19 19:12:22 ha-310211 kubelet[758]: E0919 19:12:22.981433     758 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:60802->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 19 19:12:22 ha-310211 kubelet[758]: E0919 19:12:22.981494     758 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:60788->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 19 19:12:22 ha-310211 kubelet[758]: E0919 19:12:22.984218     758 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: unknown (get configmaps) - error from a previous attempt: read tcp 192.168.49.254:60796->192.168.49.254:8443: read: connection reset by peer" logger="UnhandledError"
	Sep 19 19:12:23 ha-310211 kubelet[758]: I0919 19:12:23.914147     758 scope.go:117] "RemoveContainer" containerID="17787b410981a0ef9caacc4bc27616cc1a61482fda66cc83a0390a12b3482dd1"
	Sep 19 19:12:25 ha-310211 kubelet[758]: E0919 19:12:25.692244     758 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773145691900254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:12:25 ha-310211 kubelet[758]: E0919 19:12:25.692286     758 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773145691900254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:12:28 ha-310211 kubelet[758]: I0919 19:12:28.619642     758 scope.go:117] "RemoveContainer" containerID="2757b9660315376694b6d58bcccddc152c9f12f1d1942d1051982d87ad1f6b13"
	Sep 19 19:12:28 ha-310211 kubelet[758]: E0919 19:12:28.619838     758 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-310211_kube-system(9069f49906b31ce6b9270dd576c04d63)\"" pod="kube-system/kube-controller-manager-ha-310211" podUID="9069f49906b31ce6b9270dd576c04d63"
	Sep 19 19:12:32 ha-310211 kubelet[758]: I0919 19:12:32.938272     758 scope.go:117] "RemoveContainer" containerID="bce6465da3642557902d23e89b7c1f479ae37a7a60e55efe92c67cfca71ac754"
	Sep 19 19:12:34 ha-310211 kubelet[758]: E0919 19:12:34.196923     758 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-310211?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 19 19:12:35 ha-310211 kubelet[758]: E0919 19:12:35.693388     758 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773155693163302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:12:35 ha-310211 kubelet[758]: E0919 19:12:35.693422     758 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773155693163302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:12:39 ha-310211 kubelet[758]: I0919 19:12:39.620028     758 scope.go:117] "RemoveContainer" containerID="2757b9660315376694b6d58bcccddc152c9f12f1d1942d1051982d87ad1f6b13"
	Sep 19 19:12:44 ha-310211 kubelet[758]: E0919 19:12:44.197703     758 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-310211?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 19 19:12:45 ha-310211 kubelet[758]: E0919 19:12:45.695523     758 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773165695057179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:12:45 ha-310211 kubelet[758]: E0919 19:12:45.696087     758 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773165695057179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:12:54 ha-310211 kubelet[758]: E0919 19:12:54.198238     758 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-310211?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Sep 19 19:12:55 ha-310211 kubelet[758]: E0919 19:12:55.699612     758 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773175697738808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:12:55 ha-310211 kubelet[758]: E0919 19:12:55.699651     758 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773175697738808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:147135,},InodesUsed:&UInt64Value{Value:69,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:13:04 ha-310211 kubelet[758]: E0919 19:13:04.199581     758 controller.go:195] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-310211?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	

                                                
                                                
-- /stdout --
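
The kube-proxy log above warns that "nodePortAddresses is unset; NodePort connections will be accepted on all local IPs" and suggests --nodeport-addresses primary. A minimal sketch of acting on that suggestion in a kubeadm-managed cluster like this one (the ConfigMap name, the config.conf key, and the pod label are kubeadm defaults, which minikube follows; they are not taken from this report):

	# edit the KubeProxyConfiguration and set: nodePortAddresses: ["primary"]
	kubectl --context ha-310211 -n kube-system edit configmap kube-proxy
	# kube-proxy runs as a DaemonSet, so deleting its pods recreates them with the new config
	kubectl --context ha-310211 -n kube-system delete pod -l k8s-app=kube-proxy
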
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-310211 -n ha-310211
helpers_test.go:261: (dbg) Run:  kubectl --context ha-310211 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (128.91s)
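The post-mortem above shows where the restart stalled: kube-controller-manager in CrashLoopBackOff, the kubelet unable to renew its node lease, and the scheduler's watches reset while the apiserver behind 192.168.49.254:8443 refused connections. A minimal sketch for re-collecting the same signals by hand against this profile (profile and context names taken from the logs above; the field selector mirrors the harness command above):

	out/minikube-linux-arm64 status -p ha-310211
	out/minikube-linux-arm64 logs -p ha-310211 | grep -E "CrashLoopBackOff|Failed to update lease"
	kubectl --context ha-310211 get pods -A --field-selector=status.phase!=Running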

                                                
                                    

Test pass (294/328)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.81
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 13.36
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.42
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 13.35
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 183.11
31 TestAddons/serial/GCPAuth/Namespaces 0.23
35 TestAddons/parallel/InspektorGadget 10.88
39 TestAddons/parallel/CSI 57.55
40 TestAddons/parallel/Headlamp 16.73
41 TestAddons/parallel/CloudSpanner 5.62
42 TestAddons/parallel/LocalPath 10.31
43 TestAddons/parallel/NvidiaDevicePlugin 6.5
44 TestAddons/parallel/Yakd 10.75
45 TestAddons/StoppedEnableDisable 12.2
46 TestCertOptions 38.91
47 TestCertExpiration 253.61
49 TestForceSystemdFlag 40.36
50 TestForceSystemdEnv 38.38
56 TestErrorSpam/setup 26.69
57 TestErrorSpam/start 0.74
58 TestErrorSpam/status 1.06
59 TestErrorSpam/pause 1.84
60 TestErrorSpam/unpause 1.93
61 TestErrorSpam/stop 1.46
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.33
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 28.32
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 5.56
73 TestFunctional/serial/CacheCmd/cache/add_local 1.55
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.07
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 33.35
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.72
84 TestFunctional/serial/LogsFileCmd 1.97
85 TestFunctional/serial/InvalidService 4.48
87 TestFunctional/parallel/ConfigCmd 0.46
88 TestFunctional/parallel/DashboardCmd 8.37
89 TestFunctional/parallel/DryRun 0.6
90 TestFunctional/parallel/InternationalLanguage 0.28
91 TestFunctional/parallel/StatusCmd 1.03
95 TestFunctional/parallel/ServiceCmdConnect 7.62
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 26.72
99 TestFunctional/parallel/SSHCmd 0.54
100 TestFunctional/parallel/CpCmd 2.06
102 TestFunctional/parallel/FileSync 0.34
103 TestFunctional/parallel/CertSync 2.2
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
111 TestFunctional/parallel/License 0.26
112 TestFunctional/parallel/Version/short 0.09
113 TestFunctional/parallel/Version/components 0.89
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
118 TestFunctional/parallel/ImageCommands/ImageBuild 6.61
119 TestFunctional/parallel/ImageCommands/Setup 0.72
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.77
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
125 TestFunctional/parallel/ServiceCmd/DeployApp 11.28
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.49
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.16
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.95
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.86
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.32
136 TestFunctional/parallel/ServiceCmd/List 0.36
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
139 TestFunctional/parallel/ServiceCmd/Format 0.38
140 TestFunctional/parallel/ServiceCmd/URL 0.39
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
148 TestFunctional/parallel/ProfileCmd/profile_list 0.41
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
150 TestFunctional/parallel/MountCmd/any-port 7.87
151 TestFunctional/parallel/MountCmd/specific-port 1.98
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.57
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 167.73
160 TestMultiControlPlane/serial/DeployApp 8.05
161 TestMultiControlPlane/serial/PingHostFromPods 1.76
162 TestMultiControlPlane/serial/AddWorkerNode 67.68
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
165 TestMultiControlPlane/serial/CopyFile 19.45
166 TestMultiControlPlane/serial/StopSecondaryNode 12.83
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
168 TestMultiControlPlane/serial/RestartSecondaryNode 25.03
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.32
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 193.74
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.8
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
173 TestMultiControlPlane/serial/StopCluster 35.88
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
176 TestMultiControlPlane/serial/AddSecondaryNode 70.5
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.01
181 TestJSONOutput/start/Command 46.75
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.75
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.89
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.28
206 TestKicCustomNetwork/create_custom_network 36.26
207 TestKicCustomNetwork/use_default_bridge_network 36.55
208 TestKicExistingNetwork 35.67
209 TestKicCustomSubnet 35.64
210 TestKicStaticIP 33.97
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 66.72
215 TestMountStart/serial/StartWithMountFirst 7.13
216 TestMountStart/serial/VerifyMountFirst 0.3
217 TestMountStart/serial/StartWithMountSecond 7
218 TestMountStart/serial/VerifyMountSecond 0.29
219 TestMountStart/serial/DeleteFirst 1.64
220 TestMountStart/serial/VerifyMountPostDelete 0.28
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.8
223 TestMountStart/serial/VerifyMountPostStop 0.31
226 TestMultiNode/serial/FreshStart2Nodes 107.6
227 TestMultiNode/serial/DeployApp2Nodes 7.35
228 TestMultiNode/serial/PingHostFrom2Pods 0.98
229 TestMultiNode/serial/AddNode 28.68
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.69
232 TestMultiNode/serial/CopyFile 10.25
233 TestMultiNode/serial/StopNode 2.34
234 TestMultiNode/serial/StartAfterStop 10.38
235 TestMultiNode/serial/RestartKeepsNodes 110.57
236 TestMultiNode/serial/DeleteNode 5.75
237 TestMultiNode/serial/StopMultiNode 23.9
238 TestMultiNode/serial/RestartMultiNode 49.94
239 TestMultiNode/serial/ValidateNameConflict 35.71
244 TestPreload 136.6
246 TestScheduledStopUnix 104.91
249 TestInsufficientStorage 10.97
250 TestRunningBinaryUpgrade 64.86
252 TestKubernetesUpgrade 394.19
253 TestMissingContainerUpgrade 165.39
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 38.13
257 TestNoKubernetes/serial/StartWithStopK8s 9.06
258 TestNoKubernetes/serial/Start 7.86
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.42
260 TestNoKubernetes/serial/ProfileList 2.72
261 TestNoKubernetes/serial/Stop 1.29
262 TestNoKubernetes/serial/StartNoArgs 7.79
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
264 TestStoppedBinaryUpgrade/Setup 0.83
265 TestStoppedBinaryUpgrade/Upgrade 72.29
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.51
275 TestPause/serial/Start 48.8
276 TestPause/serial/SecondStartNoReconfiguration 22.1
277 TestPause/serial/Pause 0.82
278 TestPause/serial/VerifyStatus 0.39
279 TestPause/serial/Unpause 0.78
280 TestPause/serial/PauseAgain 0.88
281 TestPause/serial/DeletePaused 2.76
282 TestPause/serial/VerifyDeletedResources 0.39
290 TestNetworkPlugins/group/false 4.26
295 TestStartStop/group/old-k8s-version/serial/FirstStart 160.27
296 TestStartStop/group/old-k8s-version/serial/DeployApp 10.57
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.63
298 TestStartStop/group/old-k8s-version/serial/Stop 12.07
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
300 TestStartStop/group/old-k8s-version/serial/SecondStart 131.13
302 TestStartStop/group/no-preload/serial/FirstStart 73.43
303 TestStartStop/group/no-preload/serial/DeployApp 11.38
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
305 TestStartStop/group/no-preload/serial/Stop 12.08
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 295.51
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.19
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.34
311 TestStartStop/group/old-k8s-version/serial/Pause 4.58
313 TestStartStop/group/embed-certs/serial/FirstStart 78.77
314 TestStartStop/group/embed-certs/serial/DeployApp 10.35
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
316 TestStartStop/group/embed-certs/serial/Stop 11.99
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
318 TestStartStop/group/embed-certs/serial/SecondStart 276.66
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
322 TestStartStop/group/no-preload/serial/Pause 3.13
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.52
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.02
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.81
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
333 TestStartStop/group/embed-certs/serial/Pause 3.09
335 TestStartStop/group/newest-cni/serial/FirstStart 34.79
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
338 TestStartStop/group/newest-cni/serial/Stop 1.27
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
340 TestStartStop/group/newest-cni/serial/SecondStart 15.47
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
344 TestStartStop/group/newest-cni/serial/Pause 3.2
345 TestNetworkPlugins/group/auto/Start 75.78
346 TestNetworkPlugins/group/auto/KubeletFlags 0.3
347 TestNetworkPlugins/group/auto/NetCatPod 11.29
348 TestNetworkPlugins/group/auto/DNS 0.21
349 TestNetworkPlugins/group/auto/Localhost 0.17
350 TestNetworkPlugins/group/auto/HairPin 0.17
351 TestNetworkPlugins/group/kindnet/Start 82.33
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
355 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.12
356 TestNetworkPlugins/group/calico/Start 68.08
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.35
360 TestNetworkPlugins/group/kindnet/DNS 0.27
361 TestNetworkPlugins/group/kindnet/Localhost 0.21
362 TestNetworkPlugins/group/kindnet/HairPin 0.16
363 TestNetworkPlugins/group/custom-flannel/Start 62.43
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.35
366 TestNetworkPlugins/group/calico/NetCatPod 14.35
367 TestNetworkPlugins/group/calico/DNS 0.21
368 TestNetworkPlugins/group/calico/Localhost 0.18
369 TestNetworkPlugins/group/calico/HairPin 0.21
370 TestNetworkPlugins/group/enable-default-cni/Start 77.63
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.33
373 TestNetworkPlugins/group/custom-flannel/DNS 0.24
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
376 TestNetworkPlugins/group/flannel/Start 53.93
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.29
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
384 TestNetworkPlugins/group/flannel/NetCatPod 11.35
385 TestNetworkPlugins/group/bridge/Start 77.32
386 TestNetworkPlugins/group/flannel/DNS 0.23
387 TestNetworkPlugins/group/flannel/Localhost 0.25
388 TestNetworkPlugins/group/flannel/HairPin 0.24
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
390 TestNetworkPlugins/group/bridge/NetCatPod 11.27
391 TestNetworkPlugins/group/bridge/DNS 0.18
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (12.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-975733 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-975733 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.808041667s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.81s)
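
With -o=json, minikube start emits one CloudEvent JSON object per line on stdout, which is what this test consumes. A sketch of inspecting that stream by hand with jq (the event type string is my recollection of minikube's CloudEvents schema, not something asserted by this report):

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-975733 --force \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'   # print each step name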

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0919 18:39:39.722701  292666 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0919 18:39:39.722787  292666 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
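
preload-exists only checks that the tarball is present in the cache; it does not re-verify its contents. To check the file against the md5 the downloader pins in its URL (the checksum= parameter on the preload.go download line in the json-events log further below), a quick sketch:

	md5sum /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	# expected: 59cd2ef07b53f039bfd1761b921f2a02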

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-975733
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-975733: exit status 85 (66.814148ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-975733 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |          |
	|         | -p download-only-975733        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:26
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:26.953112  292672 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:26.953565  292672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:26.953601  292672 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:26.953622  292672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:26.953920  292672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	W0919 18:39:26.954101  292672 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19664-287261/.minikube/config/config.json: open /home/jenkins/minikube-integration/19664-287261/.minikube/config/config.json: no such file or directory
	I0919 18:39:26.954568  292672 out.go:352] Setting JSON to true
	I0919 18:39:26.955470  292672 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8499,"bootTime":1726762668,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 18:39:26.955571  292672 start.go:139] virtualization:  
	I0919 18:39:26.958983  292672 out.go:97] [download-only-975733] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0919 18:39:26.959175  292672 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 18:39:26.959276  292672 notify.go:220] Checking for updates...
	I0919 18:39:26.962330  292672 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:39:26.964047  292672 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:26.966093  292672 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 18:39:26.968209  292672 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	I0919 18:39:26.970008  292672 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0919 18:39:26.974292  292672 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 18:39:26.974672  292672 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:39:26.996242  292672 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:39:26.996346  292672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:27.057459  292672 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-19 18:39:27.046613966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:39:27.057576  292672 docker.go:318] overlay module found
	I0919 18:39:27.060003  292672 out.go:97] Using the docker driver based on user configuration
	I0919 18:39:27.060038  292672 start.go:297] selected driver: docker
	I0919 18:39:27.060046  292672 start.go:901] validating driver "docker" against <nil>
	I0919 18:39:27.060184  292672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:27.116902  292672 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-19 18:39:27.107080024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:39:27.117125  292672 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:27.117412  292672 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0919 18:39:27.117606  292672 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 18:39:27.119928  292672 out.go:169] Using Docker driver with root privileges
	I0919 18:39:27.121994  292672 cni.go:84] Creating CNI manager for ""
	I0919 18:39:27.122061  292672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:27.122075  292672 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 18:39:27.122169  292672 start.go:340] cluster config:
	{Name:download-only-975733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-975733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:27.124355  292672 out.go:97] Starting "download-only-975733" primary control-plane node in "download-only-975733" cluster
	I0919 18:39:27.124374  292672 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 18:39:27.126533  292672 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:39:27.126558  292672 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0919 18:39:27.126686  292672 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:39:27.145477  292672 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon, skipping pull
	I0919 18:39:27.145501  292672 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:27.145652  292672 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:39:27.145753  292672 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:27.182620  292672 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0919 18:39:27.182657  292672 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:27.182820  292672 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0919 18:39:27.188174  292672 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0919 18:39:27.188206  292672 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0919 18:39:27.277451  292672 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0919 18:39:32.593122  292672 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0919 18:39:32.593325  292672 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0919 18:39:33.791214  292672 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0919 18:39:33.791698  292672 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/download-only-975733/config.json ...
	I0919 18:39:33.791739  292672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/download-only-975733/config.json: {Name:mka941f117d9d1d86b713fdbb305cce76cc0a727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:33.791981  292672 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0919 18:39:33.792230  292672 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19664-287261/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-975733 host does not exist
	  To start a cluster, run: "minikube start -p download-only-975733"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
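
The exit status 85 here is expected by the test (which still passes): with --download-only the control-plane host is never created, as the log's closing lines state, so there is nothing for "minikube logs" to read. A sketch for confirming that no host exists (the docker filter is illustrative, matching the docker driver used above):

	out/minikube-linux-arm64 profile list
	docker ps -a --filter name=download-only-975733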

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (13.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.36256509s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (13.36s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-975733
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-217912 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-217912 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.419253525s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.42s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0919 18:39:59.700820  292666 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0919 18:39:59.700859  292666 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-217912
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-217912: exit status 85 (70.397972ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-975733 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p download-only-975733        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-975733        | download-only-975733 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | -o=json --download-only        | download-only-217912 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | -p download-only-217912        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:53.318781  292931 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:53.319034  292931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:53.319053  292931 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:53.319060  292931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:53.319500  292931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 18:39:53.320444  292931 out.go:352] Setting JSON to true
	I0919 18:39:53.321320  292931 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8526,"bootTime":1726762668,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 18:39:53.321407  292931 start.go:139] virtualization:  
	I0919 18:39:53.324247  292931 out.go:97] [download-only-217912] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0919 18:39:53.324520  292931 notify.go:220] Checking for updates...
	I0919 18:39:53.326930  292931 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:39:53.328895  292931 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:53.331452  292931 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 18:39:53.333502  292931 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	I0919 18:39:53.335445  292931 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0919 18:39:53.339686  292931 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 18:39:53.340012  292931 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:39:53.364767  292931 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:39:53.364878  292931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:53.429721  292931 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-19 18:39:53.420547716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:39:53.429831  292931 docker.go:318] overlay module found
	I0919 18:39:53.432243  292931 out.go:97] Using the docker driver based on user configuration
	I0919 18:39:53.432285  292931 start.go:297] selected driver: docker
	I0919 18:39:53.432293  292931 start.go:901] validating driver "docker" against <nil>
	I0919 18:39:53.432405  292931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:53.483379  292931 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2024-09-19 18:39:53.473935963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 18:39:53.483590  292931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:53.483894  292931 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0919 18:39:53.484055  292931 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 18:39:53.487001  292931 out.go:169] Using Docker driver with root privileges
	I0919 18:39:53.488766  292931 cni.go:84] Creating CNI manager for ""
	I0919 18:39:53.488833  292931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:53.488847  292931 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 18:39:53.488929  292931 start.go:340] cluster config:
	{Name:download-only-217912 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-217912 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:53.491185  292931 out.go:97] Starting "download-only-217912" primary control-plane node in "download-only-217912" cluster
	I0919 18:39:53.491207  292931 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 18:39:53.493307  292931 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:39:53.493340  292931 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:53.493519  292931 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:39:53.512207  292931 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon, skipping pull
	I0919 18:39:53.512231  292931 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:53.512347  292931 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:39:53.512370  292931 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0919 18:39:53.512377  292931 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0919 18:39:53.512386  292931 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0919 18:39:53.561929  292931 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0919 18:39:53.561959  292931 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:53.562127  292931 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:53.564066  292931 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0919 18:39:53.564087  292931 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I0919 18:39:53.731653  292931 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:8285fc512c7462f100de137f91fcd0ae -> /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4
	I0919 18:39:58.147170  292931 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	I0919 18:39:58.147277  292931 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19664-287261/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-217912 host does not exist
	  To start a cluster, run: "minikube start -p download-only-217912"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (13.35s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.346722294s)
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (13.35s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-217912
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
I0919 18:40:14.082801  292666 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-388144 --alsologtostderr --binary-mirror http://127.0.0.1:33855 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-388144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-388144
--- PASS: TestBinaryMirror (0.58s)
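
The test serves kubectl from a local HTTP endpoint and points minikube at it via --binary-mirror. A minimal sketch of reproducing that by hand; the server command, mirror directory, and profile name below are illustrative, not taken from the test:

	# serve a directory of Kubernetes release binaries on the port the test used
	python3 -m http.server 33855 --directory /path/to/mirror &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:33855 --driver=docker --container-runtime=crio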

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-971880
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-971880: exit status 85 (75.472479ms)

-- stdout --
	* Profile "addons-971880" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-971880"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-971880
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-971880: exit status 85 (84.365985ms)

-- stdout --
	* Profile "addons-971880" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-971880"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (183.11s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-971880 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-971880 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m3.114020729s)
--- PASS: TestAddons/Setup (183.11s)
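
After a start with this many --addons flags, one way to confirm what actually got enabled is the addons list subcommand (profile name from the run above):

	out/minikube-linux-arm64 addons list -p addons-971880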

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.23s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-971880 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-971880 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.88s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xrcg4" [61a624af-e1a5-423b-b133-e57dd4587edb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005361675s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-971880
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-971880: (5.870001756s)
--- PASS: TestAddons/parallel/InspektorGadget (10.88s)

                                                
                                    
TestAddons/parallel/CSI (57.55s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0919 18:51:21.756551  292666 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:567: csi-hostpath-driver pods stabilized in 11.749849ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-971880 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-971880 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5b10ff43-9e5c-42e8-8255-31f113b854bd] Pending
helpers_test.go:344: "task-pv-pod" [5b10ff43-9e5c-42e8-8255-31f113b854bd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5b10ff43-9e5c-42e8-8255-31f113b854bd] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004067658s
addons_test.go:590: (dbg) Run:  kubectl --context addons-971880 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-971880 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-971880 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-971880 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-971880 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-971880 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-971880 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e6e00f55-f29e-4201-9dfc-62a397d9f35b] Pending
helpers_test.go:344: "task-pv-pod-restore" [e6e00f55-f29e-4201-9dfc-62a397d9f35b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e6e00f55-f29e-4201-9dfc-62a397d9f35b] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003387562s
addons_test.go:632: (dbg) Run:  kubectl --context addons-971880 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-971880 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-971880 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-971880 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.825396165s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.55s)
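
The repeated "get pvc ... -o jsonpath={.status.phase}" lines above are the harness polling until the claim reports Bound. A simplified shell equivalent of that loop (context and claim name from the log; unlike the real helper, this sketch has no timeout handling):

	until [ "$(kubectl --context addons-971880 -n default get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2
	done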

                                                
                                    
TestAddons/parallel/Headlamp (16.73s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-971880 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-45txh" [09fb83a1-72c2-40ab-94d4-5a45c1da320e] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-45txh" [09fb83a1-72c2-40ab-94d4-5a45c1da320e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-45txh" [09fb83a1-72c2-40ab-94d4-5a45c1da320e] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003700656s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-971880 addons disable headlamp --alsologtostderr -v=1: (5.746398037s)
--- PASS: TestAddons/parallel/Headlamp (16.73s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-wz2j4" [ff351b23-83cc-4965-8f07-98bec573ed15] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003812s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-971880
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

                                                
                                    
TestAddons/parallel/LocalPath (10.31s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-971880 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-971880 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-971880 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b16f5ec0-05f8-4295-8a11-81395dc87b3d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b16f5ec0-05f8-4295-8a11-81395dc87b3d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b16f5ec0-05f8-4295-8a11-81395dc87b3d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004649624s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-971880 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 ssh "cat /opt/local-path-provisioner/pvc-3d62cb7c-5cd6-47f0-b923-9d3114eaf026_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-971880 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-971880 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.31s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6b6sb" [d2508241-1d3e-43e2-b635-ccd577d441ef] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003658299s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-971880
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

                                                
                                    
TestAddons/parallel/Yakd (10.75s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-bfrtb" [1969ec01-6ae6-40fe-bf9f-511767dcf4ca] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003769893s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-971880 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-971880 addons disable yakd --alsologtostderr -v=1: (5.748665252s)
--- PASS: TestAddons/parallel/Yakd (10.75s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-971880
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-971880: (11.9347472s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-971880
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-971880
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-971880
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

                                                
                                    
TestCertOptions (38.91s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-034723 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0919 19:38:59.166006  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-034723 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.164256896s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-034723 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-034723 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-034723 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-034723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-034723
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-034723: (2.048158027s)
--- PASS: TestCertOptions (38.91s)
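
The openssl invocation above dumps the whole apiserver certificate; what the test asserts on are the extra SANs and the 8555 port. A hedged way to pull out just the SAN block (the grep filter is an addition, not part of the test):

	out/minikube-linux-arm64 -p cert-options-034723 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'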

                                                
                                    
TestCertExpiration (253.61s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-800473 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-800473 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.468498364s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-800473 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-800473 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (28.626981971s)
helpers_test.go:175: Cleaning up "cert-expiration-800473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-800473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-800473: (2.509055395s)
--- PASS: TestCertExpiration (253.61s)
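
To see the effect of the second start's --cert-expiration=8760h, the new expiry can be read back with openssl; this is a sketch that assumes the same certificate path as in the TestCertOptions log above:

	out/minikube-linux-arm64 -p cert-expiration-800473 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"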

                                                
                                    
TestForceSystemdFlag (40.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-814768 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-814768 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.067809179s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-814768 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-814768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-814768
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-814768: (2.898918834s)
--- PASS: TestForceSystemdFlag (40.36s)
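
The test cats the CRI-O drop-in to check that --force-systemd took effect. What it is looking for is the cgroup manager setting; the expected value below is an assumption based on CRI-O's config format, since the file contents are not shown in this log:

	out/minikube-linux-arm64 -p force-systemd-flag-814768 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# expected (assumption): cgroup_manager = "systemd"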

                                                
                                    
TestForceSystemdEnv (38.38s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-669195 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0919 19:38:18.398654  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-669195 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.877938395s)
helpers_test.go:175: Cleaning up "force-systemd-env-669195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-669195
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-669195: (2.506185286s)
--- PASS: TestForceSystemdEnv (38.38s)

                                                
                                    
TestErrorSpam/setup (26.69s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-075947 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-075947 --driver=docker  --container-runtime=crio
E0919 18:58:18.399848  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:18.406262  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:18.417695  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:18.439102  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:18.480524  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:18.561930  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:18.723444  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:19.045153  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:19.687229  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:20.968585  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:23.531613  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 18:58:28.653368  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-075947 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-075947 --driver=docker  --container-runtime=crio: (26.687017224s)
--- PASS: TestErrorSpam/setup (26.69s)

                                                
                                    
TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

                                                
                                    
TestErrorSpam/status (1.06s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 status
--- PASS: TestErrorSpam/status (1.06s)

                                                
                                    
TestErrorSpam/pause (1.84s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 pause
--- PASS: TestErrorSpam/pause (1.84s)

                                                
                                    
TestErrorSpam/unpause (1.93s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

                                                
                                    
TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 stop: (1.264794249s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-075947 --log_dir /tmp/nospam-075947 stop
--- PASS: TestErrorSpam/stop (1.46s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19664-287261/.minikube/files/etc/test/nested/copy/292666/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
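
This test relies on minikube's file sync: files placed under .minikube/files/ are copied into the node at the corresponding absolute path when the cluster starts. A sketch using the same nested path as the log, with illustrative file contents:

	mkdir -p "$HOME/.minikube/files/etc/test/nested/copy/292666"
	echo "synced-by-minikube" > "$HOME/.minikube/files/etc/test/nested/copy/292666/hosts"
	# after the next "minikube start", the file should appear in the node at
	# /etc/test/nested/copy/292666/hosts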

                                                
                                    
TestFunctional/serial/StartWithProxy (47.33s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-058102 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0919 18:58:59.376742  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-058102 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (47.326619073s)
--- PASS: TestFunctional/serial/StartWithProxy (47.33s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (28.32s)

=== RUN   TestFunctional/serial/SoftStart
I0919 18:59:30.989897  292666 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-058102 --alsologtostderr -v=8
E0919 18:59:40.338758  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-058102 --alsologtostderr -v=8: (28.315164549s)
functional_test.go:663: soft start took 28.315844048s for "functional-058102" cluster.
I0919 18:59:59.305517  292666 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (28.32s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-058102 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (5.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-058102 cache add registry.k8s.io/pause:3.1: (2.495662537s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-058102 cache add registry.k8s.io/pause:3.3: (1.670005309s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-058102 cache add registry.k8s.io/pause:latest: (1.398834135s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.56s)

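For readers replaying this suite by hand, the remote-cache workflow the test drives is just a few CLI calls; a sketch using the binary and profile from this run:

	# Pull each image once on the host and stage it into the cluster node
	out/minikube-linux-arm64 -p functional-058102 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 -p functional-058102 cache add registry.k8s.io/pause:3.3
	out/minikube-linux-arm64 -p functional-058102 cache add registry.k8s.io/pause:latest
	out/minikube-linux-arm64 cache list   # the three tags should now be listed
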
TestFunctional/serial/CacheCmd/cache/add_local (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-058102 /tmp/TestFunctionalserialCacheCmdcacheadd_local210686987/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 cache add minikube-local-cache-test:functional-058102
functional_test.go:1089: (dbg) Done: out/minikube-linux-arm64 -p functional-058102 cache add minikube-local-cache-test:functional-058102: (1.009165649s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 cache delete minikube-local-cache-test:functional-058102
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-058102
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.55s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-058102 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.957944ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-058102 cache reload: (1.114891262s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

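The reload check above can be reproduced outside the harness; a sketch, same profile assumed:

	# Delete the image inside the node, confirm it is gone, then restore it from the host cache
	out/minikube-linux-arm64 -p functional-058102 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-058102 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: no such image
	out/minikube-linux-arm64 -p functional-058102 cache reload
	out/minikube-linux-arm64 -p functional-058102 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again
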
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 kubectl -- --context functional-058102 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-058102 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (33.35s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-058102 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-058102 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.344725701s)
functional_test.go:761: restart took 33.344828331s for "functional-058102" cluster.
I0919 19:00:42.818443  292666 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (33.35s)

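The restart above pushes one extra apiserver flag through --extra-config; a sketch of the call plus a hypothetical follow-up check (the jsonpath/grep pipeline is illustrative, not part of the test):

	out/minikube-linux-arm64 start -p functional-058102 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# Verify the flag landed on the static apiserver pod
	kubectl --context functional-058102 -n kube-system get pod -l component=kube-apiserver \
	    -o jsonpath='{.items[0].spec.containers[0].command}' | grep -o 'enable-admission-plugins=[^"]*'
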
TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-058102 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

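What ComponentHealth asserts can be approximated with one kubectl query; a sketch (the jsonpath expression is ours, not the test's):

	# Each control-plane pod should report phase Running and a Ready status
	kubectl --context functional-058102 -n kube-system get po -l tier=control-plane \
	    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
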
TestFunctional/serial/LogsCmd (1.72s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-058102 logs: (1.721225697s)
--- PASS: TestFunctional/serial/LogsCmd (1.72s)

TestFunctional/serial/LogsFileCmd (1.97s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 logs --file /tmp/TestFunctionalserialLogsFileCmd725538825/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-058102 logs --file /tmp/TestFunctionalserialLogsFileCmd725538825/001/logs.txt: (1.965826933s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.97s)

TestFunctional/serial/InvalidService (4.48s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-058102 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-058102
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-058102: exit status 115 (424.461186ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30164 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-058102 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.48s)

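The failure path exercised here is easy to replay; a sketch using the same manifest the test applies:

	kubectl --context functional-058102 apply -f testdata/invalidsvc.yaml
	# With no running backing pod, `minikube service` must bail out with SVC_UNREACHABLE
	out/minikube-linux-arm64 service invalid-svc -p functional-058102   # exit status 115
	kubectl --context functional-058102 delete -f testdata/invalidsvc.yaml
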
TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-058102 config get cpus: exit status 14 (73.679856ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-058102 config get cpus: exit status 14 (71.021076ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (8.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-058102 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-058102 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 322915: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.37s)

TestFunctional/parallel/DryRun (0.6s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-058102 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-058102 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (247.526338ms)

-- stdout --
	* [functional-058102] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0919 19:01:35.733993  322266 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:01:35.734246  322266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:01:35.734271  322266 out.go:358] Setting ErrFile to fd 2...
	I0919 19:01:35.734291  322266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:01:35.734585  322266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 19:01:35.734986  322266 out.go:352] Setting JSON to false
	I0919 19:01:35.736006  322266 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9828,"bootTime":1726762668,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 19:01:35.736256  322266 start.go:139] virtualization:  
	I0919 19:01:35.739507  322266 out.go:177] * [functional-058102] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0919 19:01:35.742856  322266 notify.go:220] Checking for updates...
	I0919 19:01:35.743518  322266 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:01:35.746316  322266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:01:35.749220  322266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 19:01:35.751353  322266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	I0919 19:01:35.753819  322266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 19:01:35.756154  322266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:01:35.759270  322266 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:01:35.759926  322266 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:01:35.796268  322266 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 19:01:35.796427  322266 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:01:35.871173  322266 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-19 19:01:35.86095263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:01:35.871292  322266 docker.go:318] overlay module found
	I0919 19:01:35.873939  322266 out.go:177] * Using the docker driver based on existing profile
	I0919 19:01:35.877374  322266 start.go:297] selected driver: docker
	I0919 19:01:35.877393  322266 start.go:901] validating driver "docker" against &{Name:functional-058102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-058102 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:01:35.877505  322266 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:01:35.882374  322266 out.go:201] 
	W0919 19:01:35.884558  322266 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 19:01:35.886168  322266 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-058102 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.60s)

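The dry-run check validates flags without mutating the cluster; a sketch of the failing invocation from this run:

	# 250MB is below minikube's 1800MB floor, so validation fails before any work happens
	out/minikube-linux-arm64 start -p functional-058102 --dry-run --memory 250MB \
	    --driver=docker --container-runtime=crio
	echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
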
TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-058102 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-058102 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (276.672028ms)

-- stdout --
	* [functional-058102] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0919 19:01:36.306151  322475 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:01:36.306344  322475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:01:36.307462  322475 out.go:358] Setting ErrFile to fd 2...
	I0919 19:01:36.307480  322475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:01:36.308899  322475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 19:01:36.309496  322475 out.go:352] Setting JSON to false
	I0919 19:01:36.310715  322475 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9829,"bootTime":1726762668,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 19:01:36.310805  322475 start.go:139] virtualization:  
	I0919 19:01:36.315292  322475 out.go:177] * [functional-058102] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0919 19:01:36.317663  322475 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:01:36.317722  322475 notify.go:220] Checking for updates...
	I0919 19:01:36.323976  322475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:01:36.326098  322475 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 19:01:36.328383  322475 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	I0919 19:01:36.330390  322475 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 19:01:36.333553  322475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:01:36.336369  322475 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:01:36.336901  322475 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:01:36.398007  322475 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 19:01:36.398147  322475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:01:36.498283  322475 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-19 19:01:36.486449386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:01:36.498385  322475 docker.go:318] overlay module found
	I0919 19:01:36.501126  322475 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0919 19:01:36.503491  322475 start.go:297] selected driver: docker
	I0919 19:01:36.503515  322475 start.go:901] validating driver "docker" against &{Name:functional-058102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-058102 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:01:36.503633  322475 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:01:36.506486  322475 out.go:201] 
	W0919 19:01:36.509172  322475 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 19:01:36.512855  322475 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (7.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-058102 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-058102 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-9cwqw" [a0908412-67dc-4dc7-a82d-407d2e2fa3a5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-9cwqw" [a0908412-67dc-4dc7-a82d-407d2e2fa3a5] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003629715s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32314
functional_test.go:1675: http://192.168.49.2:32314: success! body:

Hostname: hello-node-connect-65d86f57f4-9cwqw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32314
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.62s)

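The end-to-end path this test covers, expressed as plain CLI steps; a sketch (the curl call is our addition for illustration):

	kubectl --context functional-058102 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-058102 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-arm64 -p functional-058102 service hello-node-connect --url)
	curl -s "$URL"   # echoserver reflects the request back, as in the body above
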
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (26.72s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [dfda4720-3316-4086-bdf1-97925ca7a1b7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004126335s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-058102 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-058102 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-058102 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-058102 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ae08dd74-03d1-4d22-8070-eaa76d6c5228] Pending
helpers_test.go:344: "sp-pod" [ae08dd74-03d1-4d22-8070-eaa76d6c5228] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ae08dd74-03d1-4d22-8070-eaa76d6c5228] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004685372s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-058102 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-058102 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-058102 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d58c8554-813f-458d-af27-5090be2bc064] Pending
helpers_test.go:344: "sp-pod" [d58c8554-813f-458d-af27-5090be2bc064] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d58c8554-813f-458d-af27-5090be2bc064] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004449356s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-058102 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.72s)

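The persistence guarantee checked above boils down to: data written to the claim must outlive the pod; a sketch with the suite's own manifests:

	kubectl --context functional-058102 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-058102 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-058102 exec sp-pod -- touch /tmp/mount/foo
	# Recreate the pod; the PVC-backed mount must still contain the file
	kubectl --context functional-058102 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-058102 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-058102 exec sp-pod -- ls /tmp/mount   # foo
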
TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (2.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh -n functional-058102 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 cp functional-058102:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3330379612/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh -n functional-058102 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh -n functional-058102 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.06s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/292666/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo cat /etc/test/nested/copy/292666/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.2s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/292666.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo cat /etc/ssl/certs/292666.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/292666.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo cat /usr/share/ca-certificates/292666.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2926662.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo cat /etc/ssl/certs/2926662.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2926662.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo cat /usr/share/ca-certificates/2926662.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-058102 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-058102 ssh "sudo systemctl is-active docker": exit status 1 (376.407951ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-058102 ssh "sudo systemctl is-active containerd": exit status 1 (392.216273ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

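With crio selected, the competing runtimes must be disabled inside the node; a sketch (the crio line is an extra sanity check of ours, not asserted by the test):

	out/minikube-linux-arm64 -p functional-058102 ssh "sudo systemctl is-active crio"        # active, exit 0
	out/minikube-linux-arm64 -p functional-058102 ssh "sudo systemctl is-active docker"      # inactive; ssh exits 3, minikube exits 1
	out/minikube-linux-arm64 -p functional-058102 ssh "sudo systemctl is-active containerd"  # inactive
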
TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.89s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.89s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-058102 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-058102
localhost/kicbase/echo-server:functional-058102
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-058102 image ls --format short --alsologtostderr:
I0919 19:01:38.193108  322879 out.go:345] Setting OutFile to fd 1 ...
I0919 19:01:38.193298  322879 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:38.193309  322879 out.go:358] Setting ErrFile to fd 2...
I0919 19:01:38.193336  322879 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:38.194320  322879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
I0919 19:01:38.195030  322879 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:01:38.195154  322879 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:01:38.195751  322879 cli_runner.go:164] Run: docker container inspect functional-058102 --format={{.State.Status}}
I0919 19:01:38.231674  322879 ssh_runner.go:195] Run: systemctl --version
I0919 19:01:38.231730  322879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-058102
I0919 19:01:38.253641  322879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/functional-058102/id_rsa Username:docker}
I0919 19:01:38.377774  322879 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-058102 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 6a23fa8fd2b78 | 90.3MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | latest             | 195245f0c7927 | 197MB  |
| localhost/minikube-local-cache-test     | functional-058102  | d6d4e5fb3bdc3 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | d3f53a98c0a9d | 92.6MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 279f381cb3736 | 86.9MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-proxy              | v1.31.1            | 24a140c548c07 | 96MB   |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | b887aca7aed61 | 48.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/kicbase/echo-server           | functional-058102  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 7f8aa378bb47d | 67MB   |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-058102 image ls --format table --alsologtostderr:
I0919 19:01:38.848283  322987 out.go:345] Setting OutFile to fd 1 ...
I0919 19:01:38.848469  322987 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:38.848482  322987 out.go:358] Setting ErrFile to fd 2...
I0919 19:01:38.848488  322987 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:38.848770  322987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
I0919 19:01:38.849470  322987 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:01:38.849631  322987 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:01:38.850159  322987 cli_runner.go:164] Run: docker container inspect functional-058102 --format={{.State.Status}}
I0919 19:01:38.869206  322987 ssh_runner.go:195] Run: systemctl --version
I0919 19:01:38.869263  322987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-058102
I0919 19:01:38.891018  322987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/functional-058102/id_rsa Username:docker}
I0919 19:01:38.994772  322987 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-058102 image ls --format json --alsologtostderr:
[{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171"],"repoTags":["docker.io/library/nginx:latest"],"size":"197172029"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-058102"],"size":"4788229"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05
b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb","registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"92632544"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"86930758"},{"id":"b887aca7aed6134b029401507d27a
c9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48375489"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-sc
heduler:v1.31.1"],"size":"67007814"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab31
1a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"d6d4e5fb3bdc3bd2f3301c2edbbf40f63c242975de744701d8c9a9769abbbc6f","repoDigests":["localhost/minikube-local-cache-test@sha256:70c93c5ea424070efa68b0ea5005371dbf94a17665411ddb024e3988d0cb00cd"],"repoTags":["localhost/minikube-local-cache-test:functional-058102"],"size":"3330"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f3
9a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"95951255"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"90295858"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-058102 image ls --format json --alsologtostderr:
I0919 19:01:38.580647  322919 out.go:345] Setting OutFile to fd 1 ...
I0919 19:01:38.580930  322919 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:38.580960  322919 out.go:358] Setting ErrFile to fd 2...
I0919 19:01:38.580996  322919 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:38.581387  322919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
I0919 19:01:38.582325  322919 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:01:38.582583  322919 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:01:38.583749  322919 cli_runner.go:164] Run: docker container inspect functional-058102 --format={{.State.Status}}
I0919 19:01:38.605658  322919 ssh_runner.go:195] Run: systemctl --version
I0919 19:01:38.605707  322919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-058102
I0919 19:01:38.631624  322919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/functional-058102/id_rsa Username:docker}
I0919 19:01:38.738433  322919 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
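The JSON variant prints a flat array of objects with id, repoDigests, repoTags, and size fields (visible in the stdout above), so it composes well with jq. A small post-processing sketch (jq assumed to be installed on the host; in this run every image carries at least one tag):

    out/minikube-linux-arm64 -p functional-058102 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0]) \(.size)"'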

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-058102 image ls --format yaml --alsologtostderr:
- id: d6d4e5fb3bdc3bd2f3301c2edbbf40f63c242975de744701d8c9a9769abbbc6f
repoDigests:
- localhost/minikube-local-cache-test@sha256:70c93c5ea424070efa68b0ea5005371dbf94a17665411ddb024e3988d0cb00cd
repoTags:
- localhost/minikube-local-cache-test:functional-058102
size: "3330"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:4d39335073da9a0b82be8e01028f0aa75aff16caff2e2d8889d0effd579a6f64
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "90295858"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:9f661996f4d1cea788f329b8145660a1124a5a94eec8cea1dba0d564423ad171
repoTags:
- docker.io/library/nginx:latest
size: "197172029"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
- registry.k8s.io/kube-apiserver@sha256:e3a40e6c6e99ba4a4d72432b3eda702099a2926e49d4afeb6138f2d95e6371ef
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "92632544"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:19db381c08a95b2040d5637a65c7a59af6c2f21444b0c8730505280a0255fb53
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "48375489"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-058102
size: "4788229"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:a9a0505b7d0caca0edd18e37bacc9425b2c8824546b26f5b286e8cb144669849
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "86930758"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:7b3bf9f1e260ccb1fd543570e1e9869a373f716fb050cd23a6a2771aa4e06ae9
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "95951255"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:65212209347a96b08a97e679b98dca46885f09cf3a53e8d13b28d2c083a5b690
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67007814"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-058102 image ls --format yaml --alsologtostderr:
I0919 19:01:39.126403  323018 out.go:345] Setting OutFile to fd 1 ...
I0919 19:01:39.126611  323018 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:39.126640  323018 out.go:358] Setting ErrFile to fd 2...
I0919 19:01:39.126662  323018 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:39.126947  323018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
I0919 19:01:39.127637  323018 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:01:39.127831  323018 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:01:39.128444  323018 cli_runner.go:164] Run: docker container inspect functional-058102 --format={{.State.Status}}
I0919 19:01:39.149064  323018 ssh_runner.go:195] Run: systemctl --version
I0919 19:01:39.149118  323018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-058102
I0919 19:01:39.178568  323018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/functional-058102/id_rsa Username:docker}
I0919 19:01:39.276843  323018 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-058102 ssh pgrep buildkitd: exit status 1 (408.925823ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image build -t localhost/my-image:functional-058102 testdata/build --alsologtostderr
2024/09/19 19:01:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-058102 image build -t localhost/my-image:functional-058102 testdata/build --alsologtostderr: (5.95418908s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-058102 image build -t localhost/my-image:functional-058102 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9afc21fcc0d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-058102
--> f4748d499be
Successfully tagged localhost/my-image:functional-058102
f4748d499bece582442a3a624634624a3b3b8710db5906ea07c4946b488722cc
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-058102 image build -t localhost/my-image:functional-058102 testdata/build --alsologtostderr:
I0919 19:01:39.828141  323111 out.go:345] Setting OutFile to fd 1 ...
I0919 19:01:39.828887  323111 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:39.828935  323111 out.go:358] Setting ErrFile to fd 2...
I0919 19:01:39.828955  323111 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:01:39.829348  323111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
I0919 19:01:39.830280  323111 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:01:39.832705  323111 config.go:182] Loaded profile config "functional-058102": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:01:39.833292  323111 cli_runner.go:164] Run: docker container inspect functional-058102 --format={{.State.Status}}
I0919 19:01:39.864318  323111 ssh_runner.go:195] Run: systemctl --version
I0919 19:01:39.864383  323111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-058102
I0919 19:01:39.887346  323111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/functional-058102/id_rsa Username:docker}
I0919 19:01:40.002798  323111 build_images.go:161] Building image from path: /tmp/build.2827036664.tar
I0919 19:01:40.002879  323111 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 19:01:40.062900  323111 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2827036664.tar
I0919 19:01:40.071874  323111 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2827036664.tar: stat -c "%s %y" /var/lib/minikube/build/build.2827036664.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2827036664.tar': No such file or directory
I0919 19:01:40.071967  323111 ssh_runner.go:362] scp /tmp/build.2827036664.tar --> /var/lib/minikube/build/build.2827036664.tar (3072 bytes)
I0919 19:01:40.137802  323111 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2827036664
I0919 19:01:40.155366  323111 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2827036664 -xf /var/lib/minikube/build/build.2827036664.tar
I0919 19:01:40.169505  323111 crio.go:315] Building image: /var/lib/minikube/build/build.2827036664
I0919 19:01:40.169634  323111 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-058102 /var/lib/minikube/build/build.2827036664 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0919 19:01:45.683649  323111 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-058102 /var/lib/minikube/build/build.2827036664 --cgroup-manager=cgroupfs: (5.513957406s)
I0919 19:01:45.683732  323111 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2827036664
I0919 19:01:45.693345  323111 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2827036664.tar
I0919 19:01:45.703006  323111 build_images.go:217] Built localhost/my-image:functional-058102 from /tmp/build.2827036664.tar
I0919 19:01:45.703041  323111 build_images.go:133] succeeded building to: functional-058102
I0919 19:01:45.703047  323111 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.61s)
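Judging by the STEP 1/3 .. 3/3 lines above, testdata/build holds a three-instruction Containerfile roughly like the sketch below (reconstructed from the log, so the real file may differ in detail). With crio as the runtime, minikube tars the build context, copies it to /var/lib/minikube/build on the node, and drives `sudo podman build` there:

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /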

TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-058102
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)
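`update-context` rewrites the kubeconfig entry for a profile so kubectl points at the cluster's current API endpoint. A quick manual check, assuming the same profile as this run (the jsonpath filter is illustrative):

    out/minikube-linux-arm64 -p functional-058102 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-058102")].cluster.server}'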

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image load --daemon kicbase/echo-server:functional-058102 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-058102 image load --daemon kicbase/echo-server:functional-058102 --alsologtostderr: (1.458178019s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.77s)
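`image load --daemon` copies an image from the host's Docker daemon into the node's crio image store. Condensed, the round trip exercised here looks like the following (tag names taken from this run):

    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-058102
    out/minikube-linux-arm64 -p functional-058102 image load --daemon kicbase/echo-server:functional-058102
    out/minikube-linux-arm64 -p functional-058102 image ls | grep echo-server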

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image load --daemon kicbase/echo-server:functional-058102 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-058102 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-058102 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-h8tf5" [8530899a-93a3-4497-9643-cd8d2263c181] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-h8tf5" [8530899a-93a3-4497-9643-cd8d2263c181] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003857661s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.28s)
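The hello-node app that the remaining ServiceCmd tests poke at is just an echoserver Deployment exposed as a NodePort. To reproduce outside the harness (same image and port as above; `kubectl wait` stands in for the harness's own polling):

    kubectl --context functional-058102 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-058102 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-058102 wait pod -l app=hello-node --for=condition=Ready --timeout=120s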

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-058102
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image load --daemon kicbase/echo-server:functional-058102 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image save kicbase/echo-server:functional-058102 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-058102 image save kicbase/echo-server:functional-058102 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr: (2.164434764s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.16s)
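`image save` exports an image from the node to a tarball on the host; ImageLoadFromFile below feeds the same tarball back in. The pair, with the workspace path shortened for readability:

    out/minikube-linux-arm64 -p functional-058102 image save kicbase/echo-server:functional-058102 ./echo-server-save.tar
    out/minikube-linux-arm64 -p functional-058102 image load ./echo-server-save.tar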

TestFunctional/parallel/ImageCommands/ImageRemove (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image rm kicbase/echo-server:functional-058102 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.95s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-058102
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 image save --daemon kicbase/echo-server:functional-058102 --alsologtostderr
E0919 19:01:02.261310  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-058102
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-058102 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-058102 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-058102 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-058102 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 319502: os: process already finished
helpers_test.go:502: unable to terminate pid 319380: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-058102 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-058102 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bed702fa-de75-491d-8ecf-7923bbe52034] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bed702fa-de75-491d-8ecf-7923bbe52034] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004129639s
I0919 19:01:13.694875  292666 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.32s)
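testdata/testsvc.yaml creates a Service of type LoadBalancer, which on a bare minikube cluster would sit in Pending indefinitely; the `minikube tunnel` started earlier is what lets it acquire an address. Once assigned, the IP can be read back exactly as the IngressIP step below does:

    kubectl --context functional-058102 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'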

TestFunctional/parallel/ServiceCmd/List (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 service list -o json
functional_test.go:1494: Took "330.332273ms" to run "out/minikube-linux-arm64 -p functional-058102 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32244
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32244
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
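List, JSONOutput, HTTPS, Format, and URL all resolve to the same NodePort endpoint (192.168.49.2:32244 in this run). The everyday form, composed with curl to actually hit the echoserver:

    curl -s "$(out/minikube-linux-arm64 -p functional-058102 service hello-node --url)"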

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-058102 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.8.246 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-058102 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "361.128087ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "50.535227ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "357.417058ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "54.779802ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
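`profile list -o json` returns arrays of valid and invalid profiles, and --light skips the per-cluster health probes, which is why it returns in ~55ms against ~357ms for the full listing above. A post-processing sketch (top-level field names assumed from minikube's JSON output):

    out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'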

TestFunctional/parallel/MountCmd/any-port (7.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdany-port2719015524/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726772483951793037" to /tmp/TestFunctionalparallelMountCmdany-port2719015524/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726772483951793037" to /tmp/TestFunctionalparallelMountCmdany-port2719015524/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726772483951793037" to /tmp/TestFunctionalparallelMountCmdany-port2719015524/001/test-1726772483951793037
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (393.847542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 19:01:24.345914  292666 retry.go:31] will retry after 324.264013ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 19:01 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 19:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 19:01 test-1726772483951793037
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh cat /mount-9p/test-1726772483951793037
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-058102 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [2881603e-7174-4429-a863-5710efcb5b1a] Pending
helpers_test.go:344: "busybox-mount" [2881603e-7174-4429-a863-5710efcb5b1a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [2881603e-7174-4429-a863-5710efcb5b1a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [2881603e-7174-4429-a863-5710efcb5b1a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003585515s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-058102 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdany-port2719015524/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.87s)
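The mount tests run a 9p file server on the host and expose it inside the node; the probe being retried above is just findmnt over SSH. A manual session along the same lines (/tmp/somedir is a hypothetical host directory):

    out/minikube-linux-arm64 mount -p functional-058102 /tmp/somedir:/mount-9p &
    out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-058102 ssh -- ls -la /mount-9p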

TestFunctional/parallel/MountCmd/specific-port (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdspecific-port222366689/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.692538ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 19:01:32.163531  292666 retry.go:31] will retry after 568.375409ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdspecific-port222366689/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-058102 ssh "sudo umount -f /mount-9p": exit status 1 (282.612314ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-058102 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdspecific-port222366689/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3582913304/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3582913304/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3582913304/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T" /mount1: exit status 1 (709.727176ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0919 19:01:34.513558  292666 retry.go:31] will retry after 738.070861ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-058102 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-058102 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3582913304/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3582913304/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-058102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3582913304/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.57s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-058102
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-058102
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-058102
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (167.73s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-310211 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0919 19:03:18.398223  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:03:46.103587  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-310211 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m46.894808166s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (167.73s)

TestMultiControlPlane/serial/DeployApp (8.05s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-310211 -- rollout status deployment/busybox: (5.08967769s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-8r4j5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-m8hnh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-nlhw4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-8r4j5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-m8hnh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-nlhw4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-8r4j5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-m8hnh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-nlhw4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.05s)

TestMultiControlPlane/serial/PingHostFromPods (1.76s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-8r4j5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-8r4j5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-m8hnh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-m8hnh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-nlhw4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-310211 -- exec busybox-7dff88458-nlhw4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.76s)

TestMultiControlPlane/serial/AddWorkerNode (67.68s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-310211 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-310211 -v=7 --alsologtostderr: (1m6.659054279s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr: (1.017765001s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (67.68s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-310211 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.029553208s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (19.45s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 status --output json -v=7 --alsologtostderr
E0919 19:05:56.095694  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:56.102281  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:56.113785  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:56.135587  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:56.177722  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:05:56.260233  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-310211 status --output json -v=7 --alsologtostderr: (1.025220133s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp testdata/cp-test.txt ha-310211:/home/docker/cp-test.txt
E0919 19:05:56.422028  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211 "sudo cat /home/docker/cp-test.txt"
E0919 19:05:56.743921  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile192546706/001/cp-test_ha-310211.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211 "sudo cat /home/docker/cp-test.txt"
E0919 19:05:57.388120  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211:/home/docker/cp-test.txt ha-310211-m02:/home/docker/cp-test_ha-310211_ha-310211-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m02 "sudo cat /home/docker/cp-test_ha-310211_ha-310211-m02.txt"
E0919 19:05:58.670085  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211:/home/docker/cp-test.txt ha-310211-m03:/home/docker/cp-test_ha-310211_ha-310211-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m03 "sudo cat /home/docker/cp-test_ha-310211_ha-310211-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211:/home/docker/cp-test.txt ha-310211-m04:/home/docker/cp-test_ha-310211_ha-310211-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m04 "sudo cat /home/docker/cp-test_ha-310211_ha-310211-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp testdata/cp-test.txt ha-310211-m02:/home/docker/cp-test.txt
E0919 19:06:01.232268  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile192546706/001/cp-test_ha-310211-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m02:/home/docker/cp-test.txt ha-310211:/home/docker/cp-test_ha-310211-m02_ha-310211.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211 "sudo cat /home/docker/cp-test_ha-310211-m02_ha-310211.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m02:/home/docker/cp-test.txt ha-310211-m03:/home/docker/cp-test_ha-310211-m02_ha-310211-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m03 "sudo cat /home/docker/cp-test_ha-310211-m02_ha-310211-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m02:/home/docker/cp-test.txt ha-310211-m04:/home/docker/cp-test_ha-310211-m02_ha-310211-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m04 "sudo cat /home/docker/cp-test_ha-310211-m02_ha-310211-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp testdata/cp-test.txt ha-310211-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile192546706/001/cp-test_ha-310211-m03.txt
E0919 19:06:06.354503  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m03:/home/docker/cp-test.txt ha-310211:/home/docker/cp-test_ha-310211-m03_ha-310211.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211 "sudo cat /home/docker/cp-test_ha-310211-m03_ha-310211.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m03:/home/docker/cp-test.txt ha-310211-m02:/home/docker/cp-test_ha-310211-m03_ha-310211-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m02 "sudo cat /home/docker/cp-test_ha-310211-m03_ha-310211-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m03:/home/docker/cp-test.txt ha-310211-m04:/home/docker/cp-test_ha-310211-m03_ha-310211-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m04 "sudo cat /home/docker/cp-test_ha-310211-m03_ha-310211-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp testdata/cp-test.txt ha-310211-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile192546706/001/cp-test_ha-310211-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m04:/home/docker/cp-test.txt ha-310211:/home/docker/cp-test_ha-310211-m04_ha-310211.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211 "sudo cat /home/docker/cp-test_ha-310211-m04_ha-310211.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m04:/home/docker/cp-test.txt ha-310211-m02:/home/docker/cp-test_ha-310211-m04_ha-310211-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m02 "sudo cat /home/docker/cp-test_ha-310211-m04_ha-310211-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 cp ha-310211-m04:/home/docker/cp-test.txt ha-310211-m03:/home/docker/cp-test_ha-310211-m04_ha-310211-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 ssh -n ha-310211-m03 "sudo cat /home/docker/cp-test_ha-310211-m04_ha-310211-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.45s)

TestMultiControlPlane/serial/StopSecondaryNode (12.83s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 node stop m02 -v=7 --alsologtostderr
E0919 19:06:16.596719  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-310211 node stop m02 -v=7 --alsologtostderr: (12.041770918s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr: exit status 7 (785.465835ms)

-- stdout --
	ha-310211
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-310211-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-310211-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-310211-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0919 19:06:26.836721  338942 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:06:26.836868  338942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:06:26.836879  338942 out.go:358] Setting ErrFile to fd 2...
	I0919 19:06:26.836885  338942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:06:26.837111  338942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 19:06:26.837288  338942 out.go:352] Setting JSON to false
	I0919 19:06:26.837324  338942 mustload.go:65] Loading cluster: ha-310211
	I0919 19:06:26.837375  338942 notify.go:220] Checking for updates...
	I0919 19:06:26.837749  338942 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:06:26.837774  338942 status.go:174] checking status of ha-310211 ...
	I0919 19:06:26.838505  338942 cli_runner.go:164] Run: docker container inspect ha-310211 --format={{.State.Status}}
	I0919 19:06:26.860612  338942 status.go:364] ha-310211 host status = "Running" (err=<nil>)
	I0919 19:06:26.860636  338942 host.go:66] Checking if "ha-310211" exists ...
	I0919 19:06:26.861021  338942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211
	I0919 19:06:26.886430  338942 host.go:66] Checking if "ha-310211" exists ...
	I0919 19:06:26.886757  338942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:06:26.886810  338942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211
	I0919 19:06:26.904551  338942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211/id_rsa Username:docker}
	I0919 19:06:27.025250  338942 ssh_runner.go:195] Run: systemctl --version
	I0919 19:06:27.030905  338942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:06:27.044483  338942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:06:27.120600  338942 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:5 ContainersRunning:4 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:81 SystemTime:2024-09-19 19:06:27.109640507 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:06:27.121229  338942 kubeconfig.go:125] found "ha-310211" server: "https://192.168.49.254:8443"
	I0919 19:06:27.121265  338942 api_server.go:166] Checking apiserver status ...
	I0919 19:06:27.121320  338942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:06:27.134631  338942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1362/cgroup
	I0919 19:06:27.146940  338942 api_server.go:182] apiserver freezer: "2:freezer:/docker/366d82d25ff03f8c1294620ed4b590e696c703b69c11f76e2deb36404c718ce0/crio/crio-c2b52e9509a03a0ff4ea2c1d3cd500788428377a5fe6133608eeebd9763dd7fe"
	I0919 19:06:27.147012  338942 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/366d82d25ff03f8c1294620ed4b590e696c703b69c11f76e2deb36404c718ce0/crio/crio-c2b52e9509a03a0ff4ea2c1d3cd500788428377a5fe6133608eeebd9763dd7fe/freezer.state
	I0919 19:06:27.156981  338942 api_server.go:204] freezer state: "THAWED"
	I0919 19:06:27.157008  338942 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 19:06:27.164952  338942 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 19:06:27.164979  338942 status.go:456] ha-310211 apiserver status = Running (err=<nil>)
	I0919 19:06:27.164991  338942 status.go:176] ha-310211 status: &{Name:ha-310211 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:06:27.165008  338942 status.go:174] checking status of ha-310211-m02 ...
	I0919 19:06:27.165350  338942 cli_runner.go:164] Run: docker container inspect ha-310211-m02 --format={{.State.Status}}
	I0919 19:06:27.184444  338942 status.go:364] ha-310211-m02 host status = "Stopped" (err=<nil>)
	I0919 19:06:27.184470  338942 status.go:377] host is not running, skipping remaining checks
	I0919 19:06:27.184478  338942 status.go:176] ha-310211-m02 status: &{Name:ha-310211-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:06:27.184499  338942 status.go:174] checking status of ha-310211-m03 ...
	I0919 19:06:27.184817  338942 cli_runner.go:164] Run: docker container inspect ha-310211-m03 --format={{.State.Status}}
	I0919 19:06:27.201113  338942 status.go:364] ha-310211-m03 host status = "Running" (err=<nil>)
	I0919 19:06:27.201140  338942 host.go:66] Checking if "ha-310211-m03" exists ...
	I0919 19:06:27.201463  338942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211-m03
	I0919 19:06:27.219192  338942 host.go:66] Checking if "ha-310211-m03" exists ...
	I0919 19:06:27.219509  338942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:06:27.219558  338942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m03
	I0919 19:06:27.237261  338942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m03/id_rsa Username:docker}
	I0919 19:06:27.337587  338942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:06:27.351063  338942 kubeconfig.go:125] found "ha-310211" server: "https://192.168.49.254:8443"
	I0919 19:06:27.351095  338942 api_server.go:166] Checking apiserver status ...
	I0919 19:06:27.351139  338942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:06:27.362978  338942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1321/cgroup
	I0919 19:06:27.372723  338942 api_server.go:182] apiserver freezer: "2:freezer:/docker/556971c5a36e872b836e1033d633ac53c1d8d58ef69c141fad2d7b31471d73e6/crio/crio-70b79e3090f18b00b41422983132b5f8c05111faa0d0d65b34510ddabb15efb1"
	I0919 19:06:27.372818  338942 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/556971c5a36e872b836e1033d633ac53c1d8d58ef69c141fad2d7b31471d73e6/crio/crio-70b79e3090f18b00b41422983132b5f8c05111faa0d0d65b34510ddabb15efb1/freezer.state
	I0919 19:06:27.384422  338942 api_server.go:204] freezer state: "THAWED"
	I0919 19:06:27.384451  338942 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 19:06:27.392067  338942 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 19:06:27.392097  338942 status.go:456] ha-310211-m03 apiserver status = Running (err=<nil>)
	I0919 19:06:27.392177  338942 status.go:176] ha-310211-m03 status: &{Name:ha-310211-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:06:27.392194  338942 status.go:174] checking status of ha-310211-m04 ...
	I0919 19:06:27.392502  338942 cli_runner.go:164] Run: docker container inspect ha-310211-m04 --format={{.State.Status}}
	I0919 19:06:27.409555  338942 status.go:364] ha-310211-m04 host status = "Running" (err=<nil>)
	I0919 19:06:27.409581  338942 host.go:66] Checking if "ha-310211-m04" exists ...
	I0919 19:06:27.410317  338942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-310211-m04
	I0919 19:06:27.431865  338942 host.go:66] Checking if "ha-310211-m04" exists ...
	I0919 19:06:27.432306  338942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:06:27.432358  338942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-310211-m04
	I0919 19:06:27.451077  338942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/ha-310211-m04/id_rsa Username:docker}
	I0919 19:06:27.549426  338942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:06:27.562075  338942 status.go:176] ha-310211-m04 status: &{Name:ha-310211-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.83s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

TestMultiControlPlane/serial/RestartSecondaryNode (25.03s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 node start m02 -v=7 --alsologtostderr
E0919 19:06:37.078201  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-310211 node start m02 -v=7 --alsologtostderr: (23.440779781s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr: (1.430267165s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.03s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.32s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.320337535s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.32s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (193.74s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-310211 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-310211 -v=7 --alsologtostderr
E0919 19:07:18.039534  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-310211 -v=7 --alsologtostderr: (37.39501052s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-310211 --wait=true -v=7 --alsologtostderr
E0919 19:08:18.398777  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:08:39.961056  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-310211 --wait=true -v=7 --alsologtostderr: (2m36.170272453s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-310211
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (193.74s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.8s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-310211 node delete m03 -v=7 --alsologtostderr: (11.864087211s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.80s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (35.88s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 stop -v=7 --alsologtostderr
E0919 19:10:56.100386  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-310211 stop -v=7 --alsologtostderr: (35.764433545s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr: exit status 7 (114.229825ms)

-- stdout --
	ha-310211
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-310211-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-310211-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0919 19:10:57.783877  353475 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:10:57.784032  353475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:10:57.784043  353475 out.go:358] Setting ErrFile to fd 2...
	I0919 19:10:57.784049  353475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:10:57.784325  353475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 19:10:57.784511  353475 out.go:352] Setting JSON to false
	I0919 19:10:57.784548  353475 mustload.go:65] Loading cluster: ha-310211
	I0919 19:10:57.784647  353475 notify.go:220] Checking for updates...
	I0919 19:10:57.785027  353475 config.go:182] Loaded profile config "ha-310211": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:10:57.785047  353475 status.go:174] checking status of ha-310211 ...
	I0919 19:10:57.785632  353475 cli_runner.go:164] Run: docker container inspect ha-310211 --format={{.State.Status}}
	I0919 19:10:57.803789  353475 status.go:364] ha-310211 host status = "Stopped" (err=<nil>)
	I0919 19:10:57.803815  353475 status.go:377] host is not running, skipping remaining checks
	I0919 19:10:57.803821  353475 status.go:176] ha-310211 status: &{Name:ha-310211 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:10:57.803851  353475 status.go:174] checking status of ha-310211-m02 ...
	I0919 19:10:57.804177  353475 cli_runner.go:164] Run: docker container inspect ha-310211-m02 --format={{.State.Status}}
	I0919 19:10:57.821683  353475 status.go:364] ha-310211-m02 host status = "Stopped" (err=<nil>)
	I0919 19:10:57.821708  353475 status.go:377] host is not running, skipping remaining checks
	I0919 19:10:57.821716  353475 status.go:176] ha-310211-m02 status: &{Name:ha-310211-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:10:57.821736  353475 status.go:174] checking status of ha-310211-m04 ...
	I0919 19:10:57.822036  353475 cli_runner.go:164] Run: docker container inspect ha-310211-m04 --format={{.State.Status}}
	I0919 19:10:57.852410  353475 status.go:364] ha-310211-m04 host status = "Stopped" (err=<nil>)
	I0919 19:10:57.852435  353475 status.go:377] host is not running, skipping remaining checks
	I0919 19:10:57.852442  353475 status.go:176] ha-310211-m04 status: &{Name:ha-310211-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.88s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (70.5s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-310211 --control-plane -v=7 --alsologtostderr
E0919 19:13:18.398531  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-310211 --control-plane -v=7 --alsologtostderr: (1m9.491169563s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-310211 status -v=7 --alsologtostderr: (1.010365074s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.50s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.010050486s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

TestJSONOutput/start/Command (46.75s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-804845 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0919 19:14:41.469378  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-804845 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (46.747855062s)
--- PASS: TestJSONOutput/start/Command (46.75s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-804845 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-804845 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.89s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-804845 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-804845 --output=json --user=testUser: (5.88702423s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-581510 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-581510 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (112.998727ms)

-- stdout --
	{"specversion":"1.0","id":"249b4041-c1c9-42e0-95fd-31e051134344","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-581510] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e972644-26ee-4368-858e-72466e5ac694","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"5f5ee38b-84cf-4810-976c-4304fda9e971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3db8e820-bd43-4807-8f4b-61e63cb0c53d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig"}}
	{"specversion":"1.0","id":"057a6d2b-4b03-432a-a41a-e4801f7d0527","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube"}}
	{"specversion":"1.0","id":"3e324374-7d05-402c-b030-01d1fe28e41e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d2ff689c-b48b-4c3a-93e1-c3373e1f4895","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"78512a42-c2f5-4296-b0a0-2de2757487b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-581510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-581510
--- PASS: TestErrorJSONOutput (0.28s)

TestKicCustomNetwork/create_custom_network (36.26s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-783967 --network=
E0919 19:15:56.095330  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-783967 --network=: (34.21861191s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-783967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-783967
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-783967: (2.020070978s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.26s)

TestKicCustomNetwork/use_default_bridge_network (36.55s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-015563 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-015563 --network=bridge: (34.46994351s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-015563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-015563
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-015563: (2.053847471s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.55s)

TestKicExistingNetwork (35.67s)

=== RUN   TestKicExistingNetwork
I0919 19:16:43.037255  292666 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0919 19:16:43.053124  292666 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0919 19:16:43.053210  292666 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0919 19:16:43.053229  292666 cli_runner.go:164] Run: docker network inspect existing-network
W0919 19:16:43.067840  292666 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0919 19:16:43.067877  292666 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0919 19:16:43.067893  292666 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0919 19:16:43.068021  292666 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 19:16:43.085791  292666 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c7b00de9cd6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a0:74:70:eb} reservation:<nil>}
I0919 19:16:43.086290  292666 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d4b650}
I0919 19:16:43.086315  292666 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0919 19:16:43.086368  292666 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0919 19:16:43.161550  292666 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-569972 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-569972 --network=existing-network: (33.486337112s)
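Note: the network this test pre-creates uses the same flags minikube itself logs above; a minimal sketch for reproducing the scenario by hand, using the names and subnet from this run (any free subnet works, labels omitted for brevity):
	# pre-create a bridge network the way the test runner did
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 existing-network
	# start a profile attached to it, then clean up
	out/minikube-linux-arm64 start -p existing-network-569972 --network=existing-network
	out/minikube-linux-arm64 delete -p existing-network-569972 && docker network rm existing-network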
helpers_test.go:175: Cleaning up "existing-network-569972" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-569972
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-569972: (2.026117546s)
I0919 19:17:18.688547  292666 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.67s)

TestKicCustomSubnet (35.64s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-876307 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-876307 --subnet=192.168.60.0/24: (33.514458016s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-876307 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-876307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-876307
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-876307: (2.108746256s)
--- PASS: TestKicCustomSubnet (35.64s)

TestKicStaticIP (33.97s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-290852 --static-ip=192.168.200.200
E0919 19:18:18.398441  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-290852 --static-ip=192.168.200.200: (31.81665412s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-290852 ip
helpers_test.go:175: Cleaning up "static-ip-290852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-290852
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-290852: (2.007224919s)
--- PASS: TestKicStaticIP (33.97s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (66.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-464561 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-464561 --driver=docker  --container-runtime=crio: (31.041426582s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-466998 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-466998 --driver=docker  --container-runtime=crio: (30.153191575s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-464561
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-466998
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-466998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-466998
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-466998: (2.034465094s)
helpers_test.go:175: Cleaning up "first-464561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-464561
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-464561: (1.988127192s)
--- PASS: TestMinikubeProfile (66.72s)

TestMountStart/serial/StartWithMountFirst (7.13s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-089268 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-089268 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.132365221s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.13s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-089268 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

TestMountStart/serial/StartWithMountSecond (7s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-091212 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-091212 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.999561301s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.00s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-091212 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-089268 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-089268 --alsologtostderr -v=5: (1.641474514s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-091212 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-091212
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-091212: (1.20270925s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-091212
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-091212: (6.802174612s)
--- PASS: TestMountStart/serial/RestartStopped (7.80s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-091212 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (107.6s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-244599 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0919 19:20:56.095585  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-244599 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m47.083399448s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.60s)

TestMultiNode/serial/DeployApp2Nodes (7.35s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-244599 -- rollout status deployment/busybox: (5.533224589s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- exec busybox-7dff88458-l7fhj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- exec busybox-7dff88458-njkgs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- exec busybox-7dff88458-l7fhj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- exec busybox-7dff88458-njkgs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- exec busybox-7dff88458-l7fhj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- exec busybox-7dff88458-njkgs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.35s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- exec busybox-7dff88458-l7fhj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- exec busybox-7dff88458-l7fhj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- exec busybox-7dff88458-njkgs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-244599 -- exec busybox-7dff88458-njkgs -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (28.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-244599 -v 3 --alsologtostderr
E0919 19:22:19.164600  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-244599 -v 3 --alsologtostderr: (27.995656074s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.68s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-244599 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.25s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp testdata/cp-test.txt multinode-244599:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp multinode-244599:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile105681317/001/cp-test_multinode-244599.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp multinode-244599:/home/docker/cp-test.txt multinode-244599-m02:/home/docker/cp-test_multinode-244599_multinode-244599-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m02 "sudo cat /home/docker/cp-test_multinode-244599_multinode-244599-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp multinode-244599:/home/docker/cp-test.txt multinode-244599-m03:/home/docker/cp-test_multinode-244599_multinode-244599-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m03 "sudo cat /home/docker/cp-test_multinode-244599_multinode-244599-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp testdata/cp-test.txt multinode-244599-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp multinode-244599-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile105681317/001/cp-test_multinode-244599-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp multinode-244599-m02:/home/docker/cp-test.txt multinode-244599:/home/docker/cp-test_multinode-244599-m02_multinode-244599.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599 "sudo cat /home/docker/cp-test_multinode-244599-m02_multinode-244599.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp multinode-244599-m02:/home/docker/cp-test.txt multinode-244599-m03:/home/docker/cp-test_multinode-244599-m02_multinode-244599-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m03 "sudo cat /home/docker/cp-test_multinode-244599-m02_multinode-244599-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp testdata/cp-test.txt multinode-244599-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp multinode-244599-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile105681317/001/cp-test_multinode-244599-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp multinode-244599-m03:/home/docker/cp-test.txt multinode-244599:/home/docker/cp-test_multinode-244599-m03_multinode-244599.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599 "sudo cat /home/docker/cp-test_multinode-244599-m03_multinode-244599.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 cp multinode-244599-m03:/home/docker/cp-test.txt multinode-244599-m02:/home/docker/cp-test_multinode-244599-m03_multinode-244599-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 ssh -n multinode-244599-m02 "sudo cat /home/docker/cp-test_multinode-244599-m03_multinode-244599-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.25s)

TestMultiNode/serial/StopNode (2.34s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-244599 node stop m03: (1.219521005s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-244599 status: exit status 7 (566.419145ms)

-- stdout --
	multinode-244599
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-244599-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-244599-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-244599 status --alsologtostderr: exit status 7 (558.430876ms)

-- stdout --
	multinode-244599
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-244599-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-244599-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0919 19:22:40.341595  407890 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:22:40.341722  407890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:22:40.341732  407890 out.go:358] Setting ErrFile to fd 2...
	I0919 19:22:40.341737  407890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:22:40.341993  407890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 19:22:40.342186  407890 out.go:352] Setting JSON to false
	I0919 19:22:40.342219  407890 mustload.go:65] Loading cluster: multinode-244599
	I0919 19:22:40.342254  407890 notify.go:220] Checking for updates...
	I0919 19:22:40.342726  407890 config.go:182] Loaded profile config "multinode-244599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:22:40.343021  407890 status.go:174] checking status of multinode-244599 ...
	I0919 19:22:40.343811  407890 cli_runner.go:164] Run: docker container inspect multinode-244599 --format={{.State.Status}}
	I0919 19:22:40.362519  407890 status.go:364] multinode-244599 host status = "Running" (err=<nil>)
	I0919 19:22:40.362545  407890 host.go:66] Checking if "multinode-244599" exists ...
	I0919 19:22:40.362872  407890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-244599
	I0919 19:22:40.390683  407890 host.go:66] Checking if "multinode-244599" exists ...
	I0919 19:22:40.390999  407890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:22:40.391059  407890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-244599
	I0919 19:22:40.409352  407890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33268 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/multinode-244599/id_rsa Username:docker}
	I0919 19:22:40.513606  407890 ssh_runner.go:195] Run: systemctl --version
	I0919 19:22:40.518028  407890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:22:40.530254  407890 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:22:40.598045  407890 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-19 19:22:40.586802309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:22:40.598676  407890 kubeconfig.go:125] found "multinode-244599" server: "https://192.168.67.2:8443"
	I0919 19:22:40.598714  407890 api_server.go:166] Checking apiserver status ...
	I0919 19:22:40.598766  407890 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:22:40.610744  407890 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup
	I0919 19:22:40.623394  407890 api_server.go:182] apiserver freezer: "2:freezer:/docker/d792b4c29915c539b7f0712f6b4aae321e2097aa282038ecfb2031236f72aa2c/crio/crio-1e47c3cce11e38caebd2db90b59b0b554773551dc6ead17e6ce484528d60ad1d"
	I0919 19:22:40.623517  407890 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d792b4c29915c539b7f0712f6b4aae321e2097aa282038ecfb2031236f72aa2c/crio/crio-1e47c3cce11e38caebd2db90b59b0b554773551dc6ead17e6ce484528d60ad1d/freezer.state
	I0919 19:22:40.633617  407890 api_server.go:204] freezer state: "THAWED"
	I0919 19:22:40.633646  407890 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0919 19:22:40.641921  407890 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0919 19:22:40.641954  407890 status.go:456] multinode-244599 apiserver status = Running (err=<nil>)
	I0919 19:22:40.641965  407890 status.go:176] multinode-244599 status: &{Name:multinode-244599 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:22:40.642012  407890 status.go:174] checking status of multinode-244599-m02 ...
	I0919 19:22:40.642346  407890 cli_runner.go:164] Run: docker container inspect multinode-244599-m02 --format={{.State.Status}}
	I0919 19:22:40.660686  407890 status.go:364] multinode-244599-m02 host status = "Running" (err=<nil>)
	I0919 19:22:40.660711  407890 host.go:66] Checking if "multinode-244599-m02" exists ...
	I0919 19:22:40.661024  407890 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-244599-m02
	I0919 19:22:40.678281  407890 host.go:66] Checking if "multinode-244599-m02" exists ...
	I0919 19:22:40.678605  407890 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:22:40.678662  407890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-244599-m02
	I0919 19:22:40.696467  407890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/19664-287261/.minikube/machines/multinode-244599-m02/id_rsa Username:docker}
	I0919 19:22:40.798461  407890 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:22:40.810380  407890 status.go:176] multinode-244599-m02 status: &{Name:multinode-244599-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:22:40.810425  407890 status.go:174] checking status of multinode-244599-m03 ...
	I0919 19:22:40.810729  407890 cli_runner.go:164] Run: docker container inspect multinode-244599-m03 --format={{.State.Status}}
	I0919 19:22:40.842124  407890 status.go:364] multinode-244599-m03 host status = "Stopped" (err=<nil>)
	I0919 19:22:40.842146  407890 status.go:377] host is not running, skipping remaining checks
	I0919 19:22:40.842153  407890 status.go:176] multinode-244599-m03 status: &{Name:multinode-244599-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
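Note: the --alsologtostderr trace above shows the control-plane probe that minikube status performs: find the kube-apiserver PID, confirm its freezer cgroup is THAWED, then hit /healthz. A rough hand-run equivalent, sketched from the commands logged above (the container and cgroup IDs vary per run, and /healthz may require cluster credentials on newer Kubernetes):
	# locate the apiserver process inside the node
	out/minikube-linux-arm64 -p multinode-244599 ssh -- sudo pgrep -xnf kube-apiserver.*minikube.*
	# check the freezer state for that process's cgroup (path taken from /proc/<pid>/cgroup; <pid> and <cgroup-path> are placeholders)
	out/minikube-linux-arm64 -p multinode-244599 ssh -- sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state
	# probe the apiserver health endpoint
	curl -k https://192.168.67.2:8443/healthz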
--- PASS: TestMultiNode/serial/StopNode (2.34s)

TestMultiNode/serial/StartAfterStop (10.38s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-244599 node start m03 -v=7 --alsologtostderr: (9.566325377s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.38s)

TestMultiNode/serial/RestartKeepsNodes (110.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-244599
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-244599
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-244599: (24.900776037s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-244599 --wait=true -v=8 --alsologtostderr
E0919 19:23:18.398527  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-244599 --wait=true -v=8 --alsologtostderr: (1m25.545175291s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-244599
--- PASS: TestMultiNode/serial/RestartKeepsNodes (110.57s)

TestMultiNode/serial/DeleteNode (5.75s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-244599 node delete m03: (5.06812421s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.75s)

TestMultiNode/serial/StopMultiNode (23.9s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-244599 stop: (23.710035191s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-244599 status: exit status 7 (93.529708ms)

-- stdout --
	multinode-244599
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-244599-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-244599 status --alsologtostderr: exit status 7 (99.590977ms)

-- stdout --
	multinode-244599
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-244599-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0919 19:25:11.400625  415684 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:25:11.400785  415684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:25:11.400794  415684 out.go:358] Setting ErrFile to fd 2...
	I0919 19:25:11.400801  415684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:25:11.401093  415684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 19:25:11.401294  415684 out.go:352] Setting JSON to false
	I0919 19:25:11.401338  415684 mustload.go:65] Loading cluster: multinode-244599
	I0919 19:25:11.401413  415684 notify.go:220] Checking for updates...
	I0919 19:25:11.402639  415684 config.go:182] Loaded profile config "multinode-244599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:25:11.402676  415684 status.go:174] checking status of multinode-244599 ...
	I0919 19:25:11.403478  415684 cli_runner.go:164] Run: docker container inspect multinode-244599 --format={{.State.Status}}
	I0919 19:25:11.421483  415684 status.go:364] multinode-244599 host status = "Stopped" (err=<nil>)
	I0919 19:25:11.421508  415684 status.go:377] host is not running, skipping remaining checks
	I0919 19:25:11.421516  415684 status.go:176] multinode-244599 status: &{Name:multinode-244599 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:25:11.421546  415684 status.go:174] checking status of multinode-244599-m02 ...
	I0919 19:25:11.421934  415684 cli_runner.go:164] Run: docker container inspect multinode-244599-m02 --format={{.State.Status}}
	I0919 19:25:11.447671  415684 status.go:364] multinode-244599-m02 host status = "Stopped" (err=<nil>)
	I0919 19:25:11.447695  415684 status.go:377] host is not running, skipping remaining checks
	I0919 19:25:11.447703  415684 status.go:176] multinode-244599-m02 status: &{Name:multinode-244599-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.90s)

TestMultiNode/serial/RestartMultiNode (49.94s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-244599 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0919 19:25:56.094979  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-244599 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (48.980678979s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-244599 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.94s)

TestMultiNode/serial/ValidateNameConflict (35.71s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-244599
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-244599-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-244599-m02 --driver=docker  --container-runtime=crio: exit status 14 (78.93964ms)

-- stdout --
	* [multinode-244599-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	! Profile name 'multinode-244599-m02' is duplicated with machine name 'multinode-244599-m02' in profile 'multinode-244599'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-244599-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-244599-m03 --driver=docker  --container-runtime=crio: (33.288760312s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-244599
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-244599: exit status 80 (342.245978ms)

-- stdout --
	* Adding node m03 to cluster multinode-244599 as [worker]

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-244599-m03 already exists in multinode-244599-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-244599-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-244599-m03: (1.950642484s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.71s)

TestPreload (136.6s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-010257 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0919 19:28:18.399030  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-010257 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m42.837006286s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-010257 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-010257 image pull gcr.io/k8s-minikube/busybox: (3.504482261s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-010257
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-010257: (5.792373102s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-010257 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-010257 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (21.688254168s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-010257 image list
helpers_test.go:175: Cleaning up "test-preload-010257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-010257
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-010257: (2.414212925s)
--- PASS: TestPreload (136.60s)

TestScheduledStopUnix (104.91s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-401181 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-401181 --memory=2048 --driver=docker  --container-runtime=crio: (28.639519405s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-401181 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-401181 -n scheduled-stop-401181
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-401181 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0919 19:29:26.874376  292666 retry.go:31] will retry after 95.345µs: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.874536  292666 retry.go:31] will retry after 120.698µs: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.875681  292666 retry.go:31] will retry after 254.551µs: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.876860  292666 retry.go:31] will retry after 250.124µs: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.878043  292666 retry.go:31] will retry after 357.515µs: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.879213  292666 retry.go:31] will retry after 708.453µs: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.880577  292666 retry.go:31] will retry after 699.484µs: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.881777  292666 retry.go:31] will retry after 1.24819ms: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.883193  292666 retry.go:31] will retry after 3.172081ms: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.887512  292666 retry.go:31] will retry after 4.350795ms: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.892994  292666 retry.go:31] will retry after 8.549967ms: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.902217  292666 retry.go:31] will retry after 11.790406ms: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.914477  292666 retry.go:31] will retry after 15.605689ms: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.930875  292666 retry.go:31] will retry after 23.242897ms: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
I0919 19:29:26.955113  292666 retry.go:31] will retry after 39.153261ms: open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/scheduled-stop-401181/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-401181 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-401181 -n scheduled-stop-401181
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-401181
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-401181 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-401181
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-401181: exit status 7 (66.663984ms)

-- stdout --
	scheduled-stop-401181
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-401181 -n scheduled-stop-401181
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-401181 -n scheduled-stop-401181: exit status 7 (69.424067ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-401181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-401181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-401181: (4.696678895s)
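Note: the scheduled-stop flow exercised above reduces to three commands, all of which appear in this log; a minimal sketch against any profile (<profile> is a placeholder):
	out/minikube-linux-arm64 stop -p <profile> --schedule 5m        # arm a stop 5 minutes out
	out/minikube-linux-arm64 stop -p <profile> --cancel-scheduled   # disarm the pending stop
	out/minikube-linux-arm64 status --format={{.TimeToStop}} -p <profile>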
--- PASS: TestScheduledStopUnix (104.91s)

TestInsufficientStorage (10.97s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-636953 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-636953 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.488495252s)

-- stdout --
	{"specversion":"1.0","id":"5ae58ae6-2bb9-4dda-a922-aa6e79ec9373","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-636953] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ca4b687-9813-4c07-bbe6-a7b72dcb66ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"789ac5af-16e9-4759-9c85-54d84f7cc9e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a7d1edb9-e54f-4ac0-a113-b79bfbcf7dce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig"}}
	{"specversion":"1.0","id":"6c6b530c-4525-4423-962c-4a49ee9337d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube"}}
	{"specversion":"1.0","id":"6a7930c0-bf58-412f-863e-27b9fa29b645","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8925a5b6-84d4-4512-b0f2-da9eab83b2ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7770b146-19b0-4299-a01c-b2bb8abb3d38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c5749d59-1616-4bdf-93bb-959fd94ed274","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"147eda1f-bc2d-4ffa-bd10-42635aa7208d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"db488bfc-7608-49d1-955e-e91d90dd2d48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"317ee2fc-1f3f-4c3f-a586-202e5faaa45d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-636953\" primary control-plane node in \"insufficient-storage-636953\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0532669-adeb-461d-967f-2e12f9079c10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"950b1071-29c5-40a6-98a2-ba1adec9eb1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"97b990c6-667d-4947-bd0a-df589e5b0bb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-636953 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-636953 --output=json --layout=cluster: exit status 7 (293.961632ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-636953","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-636953","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 19:30:51.405640  433379 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-636953" does not appear in /home/jenkins/minikube-integration/19664-287261/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-636953 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-636953 --output=json --layout=cluster: exit status 7 (304.062242ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-636953","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-636953","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 19:30:51.712703  433441 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-636953" does not appear in /home/jenkins/minikube-integration/19664-287261/kubeconfig
	E0919 19:30:51.722998  433441 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/insufficient-storage-636953/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-636953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-636953
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-636953: (1.880513963s)
--- PASS: TestInsufficientStorage (10.97s)
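
The JSON emitted above is machine-checkable: the cluster-level StatusCode of 507 ("InsufficientStorage") is what the test keys on. A hypothetical shell check (assumes jq is installed; "status" exits 7 here, hence the "|| true"):

	st=$(out/minikube-linux-arm64 status -p insufficient-storage-636953 --output=json --layout=cluster || true)
	if [ "$(printf '%s' "$st" | jq -r '.StatusCode')" = "507" ]; then
		echo "insufficient storage reported for /var"
	fi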

                                                
                                    
x
+
TestRunningBinaryUpgrade (64.86s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.113476306 start -p running-upgrade-270518 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.113476306 start -p running-upgrade-270518 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.275109774s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-270518 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0919 19:35:56.095347  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-270518 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.250893842s)
helpers_test.go:175: Cleaning up "running-upgrade-270518" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-270518
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-270518: (3.514145867s)
--- PASS: TestRunningBinaryUpgrade (64.86s)

                                                
                                    
x
+
TestKubernetesUpgrade (394.19s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-078517 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-078517 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m10.834298906s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-078517
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-078517: (2.051913112s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-078517 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-078517 status --format={{.Host}}: exit status 7 (95.599122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-078517 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0919 19:33:18.398470  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-078517 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m41.211619224s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-078517 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-078517 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-078517 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (130.234027ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-078517] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-078517
	    minikube start -p kubernetes-upgrade-078517 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0785172 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-078517 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-078517 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-078517 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.190709667s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-078517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-078517
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-078517: (2.560080589s)
--- PASS: TestKubernetesUpgrade (394.19s)
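
Minikube refuses the in-place downgrade with K8S_DOWNGRADE_UNSUPPORTED and prints its own recovery options above. Condensed into a runnable form, option 1 (delete the cluster and recreate it at the older version) looks like this, with the flags mirroring the ones used in this run:

	out/minikube-linux-arm64 delete -p kubernetes-upgrade-078517
	out/minikube-linux-arm64 start -p kubernetes-upgrade-078517 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio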

                                                
                                    
x
+
TestMissingContainerUpgrade (165.39s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1121466776 start -p missing-upgrade-297774 --memory=2200 --driver=docker  --container-runtime=crio
E0919 19:30:56.095316  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:31:21.473561  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1121466776 start -p missing-upgrade-297774 --memory=2200 --driver=docker  --container-runtime=crio: (1m24.415092837s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-297774
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-297774: (10.43284143s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-297774
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-297774 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-297774 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.161454561s)
helpers_test.go:175: Cleaning up "missing-upgrade-297774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-297774
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-297774: (2.354861455s)
--- PASS: TestMissingContainerUpgrade (165.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-651694 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-651694 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (76.587462ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-651694] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (38.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-651694 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-651694 --driver=docker  --container-runtime=crio: (37.712742044s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-651694 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (9.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-651694 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-651694 --no-kubernetes --driver=docker  --container-runtime=crio: (6.819151337s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-651694 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-651694 status -o json: exit status 2 (298.964381ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-651694","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-651694
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-651694: (1.936844222s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.06s)
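
The status JSON above is the interesting artifact: with --no-kubernetes the Host stays Running while Kubelet and APIServer read Stopped, and the command exits 2. A hypothetical verification (assumes jq; the exit code is swallowed so the fields can be inspected):

	st=$(out/minikube-linux-arm64 -p NoKubernetes-651694 status -o json || true)
	[ "$(printf '%s' "$st" | jq -r '.Host')" = "Running" ] &&
	[ "$(printf '%s' "$st" | jq -r '.Kubelet')" = "Stopped" ] &&
	echo "host alive with Kubernetes disabled"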

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-651694 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-651694 --no-kubernetes --driver=docker  --container-runtime=crio: (7.855784913s)
--- PASS: TestNoKubernetes/serial/Start (7.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-651694 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-651694 "sudo systemctl is-active --quiet service kubelet": exit status 1 (419.185229ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.42s)
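
The "Process exited with status 3" in stderr is the probe working as intended: systemctl is-active exits 0 for an active unit and 3 for an inactive one, and minikube ssh propagates that code. Spelled out as a sketch:

	# Exit 0 means kubelet is active; 3 (as seen above) means inactive.
	if out/minikube-linux-arm64 ssh -p NoKubernetes-651694 "sudo systemctl is-active --quiet service kubelet"; then
		echo "kubelet running (unexpected for --no-kubernetes)"
	else
		echo "kubelet not active, as expected"
	fi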

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (2.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (2.151506928s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-651694
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-651694: (1.292242581s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-651694 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-651694 --driver=docker  --container-runtime=crio: (7.7862046s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.79s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-651694 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-651694 "sudo systemctl is-active --quiet service kubelet": exit status 1 (345.956536ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.83s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (72.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3422576444 start -p stopped-upgrade-256849 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3422576444 start -p stopped-upgrade-256849 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.453783015s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3422576444 -p stopped-upgrade-256849 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3422576444 -p stopped-upgrade-256849 stop: (2.72712583s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-256849 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-256849 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.110625332s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (72.29s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-256849
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-256849: (1.509101007s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.51s)

                                                
                                    
x
+
TestPause/serial/Start (48.8s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-972111 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-972111 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (48.794897643s)
--- PASS: TestPause/serial/Start (48.80s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (22.1s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-972111 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-972111 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.070386051s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (22.10s)

                                                
                                    
x
+
TestPause/serial/Pause (0.82s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-972111 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-972111 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-972111 --output=json --layout=cluster: exit status 2 (390.457121ms)

                                                
                                                
-- stdout --
	{"Name":"pause-972111","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-972111","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
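
Exit status 2 plus StatusCode 418 ("Paused") is the expected shape here, not a failure. A hypothetical jq probe for the paused state shown above:

	st=$(out/minikube-linux-arm64 status -p pause-972111 --output=json --layout=cluster || true)
	printf '%s' "$st" | jq -e '.StatusName == "Paused"' >/dev/null && echo "cluster is paused"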

                                                
                                    
x
+
TestPause/serial/Unpause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-972111 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.88s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-972111 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.88s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.76s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-972111 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-972111 --alsologtostderr -v=5: (2.762911293s)
--- PASS: TestPause/serial/DeletePaused (2.76s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-972111
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-972111: exit status 1 (16.010417ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-972111: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.39s)
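
Here the non-zero exit is the pass condition: once the profile is deleted, "docker volume inspect" fails with "no such volume". A hypothetical standalone version of the same check:

	if docker volume inspect pause-972111 >/dev/null 2>&1; then
		echo "volume still present" >&2
		exit 1
	else
		echo "volume removed as expected"
	fi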

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-206384 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-206384 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (258.299308ms)

                                                
                                                
-- stdout --
	* [false-206384] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:38:03.511927  473120 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:38:03.512423  473120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:38:03.512440  473120 out.go:358] Setting ErrFile to fd 2...
	I0919 19:38:03.512446  473120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:38:03.512735  473120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-287261/.minikube/bin
	I0919 19:38:03.513217  473120 out.go:352] Setting JSON to false
	I0919 19:38:03.514215  473120 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12016,"bootTime":1726762668,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0919 19:38:03.514294  473120 start.go:139] virtualization:  
	I0919 19:38:03.523732  473120 out.go:177] * [false-206384] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0919 19:38:03.530808  473120 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:38:03.530918  473120 notify.go:220] Checking for updates...
	I0919 19:38:03.536376  473120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:38:03.538380  473120 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-287261/kubeconfig
	I0919 19:38:03.540473  473120 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-287261/.minikube
	I0919 19:38:03.542730  473120 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0919 19:38:03.545294  473120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:38:03.548408  473120 config.go:182] Loaded profile config "kubernetes-upgrade-078517": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:38:03.548551  473120 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:38:03.595644  473120 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 19:38:03.595782  473120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:38:03.691553  473120 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-19 19:38:03.679168334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0919 19:38:03.691672  473120 docker.go:318] overlay module found
	I0919 19:38:03.693739  473120 out.go:177] * Using the docker driver based on user configuration
	I0919 19:38:03.695611  473120 start.go:297] selected driver: docker
	I0919 19:38:03.695648  473120 start.go:901] validating driver "docker" against <nil>
	I0919 19:38:03.695667  473120 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:38:03.699065  473120 out.go:201] 
	W0919 19:38:03.700880  473120 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0919 19:38:03.702995  473120 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-206384 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-206384" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:38:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-078517
contexts:
- context:
    cluster: kubernetes-upgrade-078517
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:38:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-078517
  name: kubernetes-upgrade-078517
current-context: kubernetes-upgrade-078517
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-078517
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kubernetes-upgrade-078517/client.crt
    client-key: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kubernetes-upgrade-078517/client.key
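
This dump explains the errors throughout the debug log: the only kubeconfig entry belongs to the concurrently running kubernetes-upgrade-078517 profile, and no false-206384 context was ever created, so every kubectl probe above fails with "context was not found". A hypothetical sanity check before reading such a log:

	kubectl config current-context   # prints kubernetes-upgrade-078517 here, not false-206384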

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-206384

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: cri-docker daemon status:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: cri-docker daemon config:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: cri-dockerd version:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: containerd daemon status:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: containerd daemon config:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: /etc/containerd/config.toml:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: containerd config dump:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: crio daemon status:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: crio daemon config:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: /etc/crio:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

>>> host: crio config:
* Profile "false-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-206384"

----------------------- debugLogs end: false-206384 [took: 3.843553439s] --------------------------------
helpers_test.go:175: Cleaning up "false-206384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-206384
--- PASS: TestNetworkPlugins/group/false (4.26s)
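
For reference, the cleanup flow recorded above can be reproduced by hand; a minimal sketch using a stock minikube binary (the harness invokes its own build at out/minikube-linux-arm64):

    # A profile that was never created yields the "not found" hint seen in the debug logs.
    minikube profile list
    # Delete the profile and its Docker container, as the cleanup step does.
    minikube delete -p false-206384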

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (160.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-801534 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0919 19:40:56.095427  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-801534 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m40.268199185s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (160.27s)

x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-801534 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f5d19292-2ef0-43c2-8173-8c94b5b6f4c0] Pending
helpers_test.go:344: "busybox" [f5d19292-2ef0-43c2-8173-8c94b5b6f4c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f5d19292-2ef0-43c2-8173-8c94b5b6f4c0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003027489s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-801534 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)
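
The DeployApp step is plain kubectl against the profile's context; an equivalent manual sequence, as a sketch (same context and manifest as in the log):

    kubectl --context old-k8s-version-801534 create -f testdata/busybox.yaml
    # Block until the pod is Ready, mirroring the test's 8m budget.
    kubectl --context old-k8s-version-801534 wait --for=condition=Ready pod/busybox --timeout=8m
    # The test then reads the container's open-file limit:
    kubectl --context old-k8s-version-801534 exec busybox -- /bin/sh -c "ulimit -n"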

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-801534 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-801534 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.440428135s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-801534 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.63s)
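
The --images/--registries flags override the addon's image reference, and the follow-up describe is the verification step; a sketch of checking the result by hand (the grep and the exact fake.domain-prefixed image string are illustrative assumptions, not from the log):

    kubectl --context old-k8s-version-801534 -n kube-system \
      describe deploy/metrics-server | grep -i image
    # Expected to show the overridden registry, e.g. fake.domain/registry.k8s.io/echoserver:1.4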

x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-801534 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-801534 --alsologtostderr -v=3: (12.065676266s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.07s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-801534 -n old-k8s-version-801534
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-801534 -n old-k8s-version-801534: exit status 7 (72.921259ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-801534 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
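
"exit status 7 (may be ok)" is expected here: minikube status reports host state through its exit code as well as its output, and this run pairs exit code 7 with "Stopped". A minimal sketch:

    minikube status --format='{{.Host}}' -p old-k8s-version-801534   # prints: Stopped
    echo $?                                                          # prints: 7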

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (131.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-801534 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-801534 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m10.621304542s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-801534 -n old-k8s-version-801534
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (131.13s)

x
+
TestStartStop/group/no-preload/serial/FirstStart (73.43s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-064462 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0919 19:43:18.398811  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-064462 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m13.432685723s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.43s)

x
+
TestStartStop/group/no-preload/serial/DeployApp (11.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-064462 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [80ef12af-2eb4-4807-8937-4e3afd41672a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [80ef12af-2eb4-4807-8937-4e3afd41672a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003968492s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-064462 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.38s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-064462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-064462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.021491482s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-064462 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

x
+
TestStartStop/group/no-preload/serial/Stop (12.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-064462 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-064462 --alsologtostderr -v=3: (12.080590817s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-064462 -n no-preload-064462
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-064462 -n no-preload-064462: exit status 7 (68.125986ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-064462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

x
+
TestStartStop/group/no-preload/serial/SecondStart (295.51s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-064462 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-064462 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m55.142078405s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-064462 -n no-preload-064462
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (295.51s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-hd2v9" [08ab8c17-dfbc-42cb-b658-0225d06edfd5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003566771s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-hd2v9" [08ab8c17-dfbc-42cb-b658-0225d06edfd5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003869935s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-801534 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.19s)
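
Both dashboard checks key off the k8s-app=kubernetes-dashboard label in the addon's own namespace; a sketch of the same verification done manually:

    kubectl --context old-k8s-version-801534 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
    kubectl --context old-k8s-version-801534 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper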

x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-801534 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.34s)
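
The image audit lists what the runtime has cached and flags anything outside the core Kubernetes set as "non-minikube"; a sketch of inspecting the same data (the jq filter and the .repoTags field name are assumptions about the JSON shape, not part of the test):

    minikube -p old-k8s-version-801534 image list --format=json | jq -r '.[].repoTags[]'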

x
+
TestStartStop/group/old-k8s-version/serial/Pause (4.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-801534 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-801534 --alsologtostderr -v=1: (1.227693834s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-801534 -n old-k8s-version-801534
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-801534 -n old-k8s-version-801534: exit status 2 (460.07011ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-801534 -n old-k8s-version-801534
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-801534 -n old-k8s-version-801534: exit status 2 (434.455675ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-801534 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-801534 --alsologtostderr -v=1: (1.581165315s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-801534 -n old-k8s-version-801534
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-801534 -n old-k8s-version-801534
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.58s)
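
The pause cycle asserts state via status templates, expecting exit status 2 while paused; a sketch of the shell equivalent of what this test ran:

    minikube pause -p old-k8s-version-801534
    minikube status --format='{{.APIServer}}' -p old-k8s-version-801534   # Paused, exit 2
    minikube status --format='{{.Kubelet}}' -p old-k8s-version-801534     # Stopped, exit 2
    minikube unpause -p old-k8s-version-801534
    minikube status --format='{{.APIServer}}' -p old-k8s-version-801534   # exit 0 once unpaused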

x
+
TestStartStop/group/embed-certs/serial/FirstStart (78.77s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-135670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0919 19:45:56.095395  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-135670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m18.76595715s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.77s)

x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-135670 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aa74e2d5-9d74-4f08-b656-95f5ac954d18] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [aa74e2d5-9d74-4f08-b656-95f5ac954d18] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004186122s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-135670 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-135670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-135670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.02297899s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-135670 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

x
+
TestStartStop/group/embed-certs/serial/Stop (11.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-135670 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-135670 --alsologtostderr -v=3: (11.992506087s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.99s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-135670 -n embed-certs-135670
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-135670 -n embed-certs-135670: exit status 7 (77.715813ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-135670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

x
+
TestStartStop/group/embed-certs/serial/SecondStart (276.66s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-135670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0919 19:47:09.625612  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:09.631959  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:09.643324  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:09.664670  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:09.706009  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:09.787342  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:09.948623  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:10.270864  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:10.912834  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:12.194320  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:14.756194  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:19.877505  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:30.120500  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:47:50.602260  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:01.474813  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:18.398500  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:48:31.563727  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-135670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m36.222297266s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-135670 -n embed-certs-135670
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (276.66s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t55vd" [3ee36bea-a94e-47ed-8a97-4bfff8d67941] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003922824s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t55vd" [3ee36bea-a94e-47ed-8a97-4bfff8d67941] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004242843s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-064462 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-064462 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

x
+
TestStartStop/group/no-preload/serial/Pause (3.13s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-064462 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-064462 -n no-preload-064462
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-064462 -n no-preload-064462: exit status 2 (362.3737ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-064462 -n no-preload-064462
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-064462 -n no-preload-064462: exit status 2 (340.091724ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-064462 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-064462 -n no-preload-064462
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-064462 -n no-preload-064462
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-330596 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0919 19:49:53.485470  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-330596 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (53.516641845s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.52s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-330596 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6cbf83c4-a04e-4356-b4f1-71859681294a] Pending
helpers_test.go:344: "busybox" [6cbf83c4-a04e-4356-b4f1-71859681294a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6cbf83c4-a04e-4356-b4f1-71859681294a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00558397s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-330596 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-330596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-330596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.039279848s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-330596 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-330596 --alsologtostderr -v=3
E0919 19:50:56.095798  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-330596 --alsologtostderr -v=3: (12.022152846s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.02s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330596 -n default-k8s-diff-port-330596
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330596 -n default-k8s-diff-port-330596: exit status 7 (75.176548ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-330596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-330596 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-330596 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m26.456341167s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-330596 -n default-k8s-diff-port-330596
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.81s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hpqzl" [2ec81a34-94f8-423e-a539-641a9a85acb0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003481462s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hpqzl" [2ec81a34-94f8-423e-a539-641a9a85acb0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003914575s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-135670 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-135670 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

x
+
TestStartStop/group/embed-certs/serial/Pause (3.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-135670 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-135670 -n embed-certs-135670
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-135670 -n embed-certs-135670: exit status 2 (325.667819ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-135670 -n embed-certs-135670
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-135670 -n embed-certs-135670: exit status 2 (337.285253ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-135670 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-135670 -n embed-certs-135670
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-135670 -n embed-certs-135670
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.09s)

x
+
TestStartStop/group/newest-cni/serial/FirstStart (34.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-478563 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0919 19:52:09.625072  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-478563 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (34.785437452s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.79s)
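
newest-cni starts with an explicit CNI network plugin and a custom pod CIDR but deploys no workloads, which is why the later steps warn that pods cannot schedule without additional setup; the flags that matter, extracted from the invocation above:

    minikube start -p newest-cni-478563 --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.31.1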

x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-478563 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-478563 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.194101571s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

x
+
TestStartStop/group/newest-cni/serial/Stop (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-478563 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-478563 --alsologtostderr -v=3: (1.274421108s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.27s)

x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-478563 -n newest-cni-478563
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-478563 -n newest-cni-478563: exit status 7 (67.906287ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-478563 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

x
+
TestStartStop/group/newest-cni/serial/SecondStart (15.47s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-478563 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-478563 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (15.087113806s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-478563 -n newest-cni-478563
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.47s)

x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-478563 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

x
+
TestStartStop/group/newest-cni/serial/Pause (3.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-478563 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-478563 -n newest-cni-478563
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-478563 -n newest-cni-478563: exit status 2 (348.39476ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-478563 -n newest-cni-478563
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-478563 -n newest-cni-478563: exit status 2 (334.556608ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-478563 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-478563 -n newest-cni-478563
E0919 19:52:37.326987  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-478563 -n newest-cni-478563
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.20s)

x
+
TestNetworkPlugins/group/auto/Start (75.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0919 19:53:18.398428  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m15.776627035s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.78s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-206384 "pgrep -a kubelet"
I0919 19:53:56.575138  292666 config.go:182] Loaded profile config "auto-206384": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-206384 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-knx4w" [38f0a463-15ba-41ff-a746-4b445085705c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-knx4w" [38f0a463-15ba-41ff-a746-4b445085705c] Running
E0919 19:54:04.534973  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:04.541488  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:04.552840  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:04.574309  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:04.615837  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:04.697243  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:04.858844  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:05.180715  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:05.822942  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:54:07.105044  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004702957s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.29s)
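
Note: every NetCatPod step applies testdata/netcat-deployment.yaml (path relative to the test tree) and polls until the app=netcat pod is Ready. A rough stand-alone equivalent; the harness does the polling in Go, so "kubectl wait" here is only an illustrative substitute:

	kubectl --context auto-206384 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-206384 wait --for=condition=ready pod -l app=netcat --timeout=15m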

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-206384 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
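
Note: the Localhost and HairPin probes both run netcat in scan mode: -z tests only whether the port accepts a connection, -w 5 caps the connect timeout at five seconds, and -i 5 spaces out the probes. The hairpin case verifies the pod can reach itself through its own service name:

	kubectl --context auto-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback
	kubectl --context auto-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin via the service name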

TestNetworkPlugins/group/kindnet/Start (82.33s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0919 19:54:45.511798  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m22.330330565s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.33s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9rwmg" [75198641-d06d-41ca-8cd2-b42ed6f840fe] Running
E0919 19:55:26.473533  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00384736s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
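
Note: this step only confirms the dashboard pod is healthy after the restart; roughly the equivalent of the following manual check (illustrative, not the harness's own code):

	kubectl --context default-k8s-diff-port-330596 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard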

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9rwmg" [75198641-d06d-41ca-8cd2-b42ed6f840fe] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004288316s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-330596 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-330596 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-330596 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-330596 -n default-k8s-diff-port-330596
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-330596 -n default-k8s-diff-port-330596: exit status 2 (348.983562ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-330596 -n default-k8s-diff-port-330596
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-330596 -n default-k8s-diff-port-330596: exit status 2 (327.659464ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-330596 --alsologtostderr -v=1
E0919 19:55:39.168191  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-330596 -n default-k8s-diff-port-330596
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-330596 -n default-k8s-diff-port-330596
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)
E0919 20:00:18.772657  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:36.101595  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:36.108063  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:36.119633  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:36.141083  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:36.182534  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:36.264026  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:36.425538  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:36.747203  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:37.389160  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:38.671476  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:41.233482  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/calico/Start (68.08s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.076123384s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.08s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wzrjf" [da12441e-513b-40fb-bcc8-bbbbde3d366a] Running
E0919 19:55:56.095390  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005130671s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-206384 "pgrep -a kubelet"
I0919 19:55:57.625482  292666 config.go:182] Loaded profile config "kindnet-206384": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-206384 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rk6vk" [d6387693-90f3-4582-a03b-82bc6f7da7ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rk6vk" [d6387693-90f3-4582-a03b-82bc6f7da7ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.01483457s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

TestNetworkPlugins/group/kindnet/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-206384 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (62.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0919 19:56:48.395040  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.432287787s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.43s)
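
Note: as the Start invocations in this group show, --cni accepts a built-in plugin name as well as a path to a custom CNI manifest. For example, with "minikube" standing in for the binary under test:

	minikube start -p kindnet-206384 --cni=kindnet --driver=docker --container-runtime=crio
	minikube start -p custom-flannel-206384 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio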

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dm9zl" [538be2ab-f3f6-4cd1-9576-32906ec33898] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008062324s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-206384 "pgrep -a kubelet"
I0919 19:56:57.452966  292666 config.go:182] Loaded profile config "calico-206384": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (14.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-206384 replace --force -f testdata/netcat-deployment.yaml
I0919 19:56:57.785062  292666 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l66q5" [c560a7a3-a029-4cd4-8366-c07cea8cb47c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l66q5" [c560a7a3-a029-4cd4-8366-c07cea8cb47c] Running
E0919 19:57:09.625632  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/old-k8s-version-801534/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004122463s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.35s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-206384 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (77.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.625512726s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.63s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-206384 "pgrep -a kubelet"
I0919 19:57:37.182631  292666 config.go:182] Loaded profile config "custom-flannel-206384": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-206384 replace --force -f testdata/netcat-deployment.yaml
I0919 19:57:37.497924  292666 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xcvrh" [a9eada47-ecaa-4734-a993-11366c225cd9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xcvrh" [a9eada47-ecaa-4734-a993-11366c225cd9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005581648s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-206384 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (53.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0919 19:58:18.398255  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/addons-971880/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (53.932288987s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.93s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-206384 "pgrep -a kubelet"
I0919 19:58:53.471773  292666 config.go:182] Loaded profile config "enable-default-cni-206384": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-206384 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wddvl" [ad9c1d93-bd0a-445e-87a0-8e482a0dc7fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 19:58:56.834602  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:58:56.841274  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:58:56.853027  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:58:56.874777  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:58:56.916674  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:58:56.998090  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:58:57.159485  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:58:57.481103  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:58:58.123274  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-wddvl" [ad9c1d93-bd0a-445e-87a0-8e482a0dc7fc] Running
E0919 19:58:59.404649  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:59:01.966088  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:59:04.534911  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/no-preload-064462/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003526647s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.29s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-206384 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7l5jc" [3737510f-98eb-42af-9d30-d32aa0b9dcd9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005113232s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
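
Note: the ControllerPod steps confirm that the plugin's own daemonset pod is Running; for flannel that is a pod labeled app=flannel in the kube-flannel namespace. An illustrative manual equivalent:

	kubectl --context flannel-206384 get pods -n kube-flannel -l app=flannel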

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-206384 "pgrep -a kubelet"
I0919 19:59:16.243363  292666 config.go:182] Loaded profile config "flannel-206384": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-206384 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kq752" [926bb06c-e6dc-44a3-a512-1830b299947e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 19:59:17.329735  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/auto-206384/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-kq752" [926bb06c-e6dc-44a3-a512-1830b299947e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.009800196s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.35s)

TestNetworkPlugins/group/bridge/Start (77.32s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-206384 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m17.316280399s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.32s)

TestNetworkPlugins/group/flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-206384 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

TestNetworkPlugins/group/flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.25s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-206384 "pgrep -a kubelet"
I0919 20:00:44.445806  292666 config.go:182] Loaded profile config "bridge-206384": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-206384 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mwzv8" [902ebceb-011e-4a36-a167-2b9bf3b574c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 20:00:46.355893  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/default-k8s-diff-port-330596/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mwzv8" [902ebceb-011e-4a36-a167-2b9bf3b574c3] Running
E0919 20:00:51.206265  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kindnet-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:51.212706  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kindnet-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:51.224136  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kindnet-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:51.245541  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kindnet-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:51.287848  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kindnet-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:51.369350  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kindnet-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:51.530947  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kindnet-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:51.852843  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kindnet-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:52.494959  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kindnet-206384/client.crt: no such file or directory" logger="UnhandledError"
E0919 20:00:53.776496  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kindnet-206384/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004229922s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-206384 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-206384 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0919 20:00:56.095157  292666 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/functional-058102/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (30/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-592744 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-592744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-592744
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-179829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-179829
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.34s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-206384 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-206384

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-206384

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-206384

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-206384

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-206384

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-206384

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-206384

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-206384

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-206384

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-206384

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: /etc/hosts:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: /etc/resolv.conf:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-206384

>>> host: crictl pods:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: crictl containers:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> k8s: describe netcat deployment:
error: context "kubenet-206384" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-206384" does not exist

>>> k8s: netcat logs:
error: context "kubenet-206384" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-206384" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-206384" does not exist

>>> k8s: coredns logs:
error: context "kubenet-206384" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-206384" does not exist

>>> k8s: api server logs:
error: context "kubenet-206384" does not exist

>>> host: /etc/cni:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: ip a s:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: ip r s:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: iptables-save:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: iptables table nat:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-206384" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-206384" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-206384" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: kubelet daemon config:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> k8s: kubelet logs:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:37:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-078517
contexts:
- context:
    cluster: kubernetes-upgrade-078517
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:37:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-078517
  name: kubernetes-upgrade-078517
current-context: kubernetes-upgrade-078517
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-078517
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kubernetes-upgrade-078517/client.crt
    client-key: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kubernetes-upgrade-078517/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-206384

>>> host: docker daemon status:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: docker daemon config:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: docker system info:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: cri-docker daemon status:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: cri-docker daemon config:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: cri-dockerd version:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: containerd daemon status:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: containerd daemon config:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: containerd config dump:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: crio daemon status:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: crio daemon config:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: /etc/crio:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

>>> host: crio config:
* Profile "kubenet-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-206384"

----------------------- debugLogs end: kubenet-206384 [took: 5.128922966s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-206384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-206384
--- SKIP: TestNetworkPlugins/group/kubenet (5.34s)

x
+
TestNetworkPlugins/group/cilium (4.01s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it interferes with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-206384 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-206384

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-206384

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-206384

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-206384

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-206384

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-206384

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-206384

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-206384

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-206384

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-206384

>>> host: /etc/nsswitch.conf:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: /etc/hosts:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: /etc/resolv.conf:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-206384

>>> host: crictl pods:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: crictl containers:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> k8s: describe netcat deployment:
error: context "cilium-206384" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-206384" does not exist

>>> k8s: netcat logs:
error: context "cilium-206384" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-206384" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-206384" does not exist

>>> k8s: coredns logs:
error: context "cilium-206384" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-206384" does not exist

>>> k8s: api server logs:
error: context "cilium-206384" does not exist

>>> host: /etc/cni:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: ip a s:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: ip r s:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: iptables-save:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: iptables table nat:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-206384

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-206384

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-206384" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-206384" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-206384

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-206384

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-206384" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-206384" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-206384" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-206384" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-206384" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: kubelet daemon config:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> k8s: kubelet logs:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-287261/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:38:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-078517
contexts:
- context:
    cluster: kubernetes-upgrade-078517
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:38:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-078517
  name: kubernetes-upgrade-078517
current-context: kubernetes-upgrade-078517
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-078517
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kubernetes-upgrade-078517/client.crt
    client-key: /home/jenkins/minikube-integration/19664-287261/.minikube/profiles/kubernetes-upgrade-078517/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-206384

>>> host: docker daemon status:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: docker daemon config:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: docker system info:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: cri-docker daemon status:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: cri-docker daemon config:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: cri-dockerd version:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: containerd daemon status:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: containerd daemon config:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: containerd config dump:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: crio daemon status:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: crio daemon config:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: /etc/crio:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

>>> host: crio config:
* Profile "cilium-206384" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-206384"

----------------------- debugLogs end: cilium-206384 [took: 3.851749535s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-206384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-206384
--- SKIP: TestNetworkPlugins/group/cilium (4.01s)